Feb 28 03:35:44 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 28 03:35:44 crc restorecon[4572]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 03:35:44 crc restorecon[4572]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc 
restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc 
restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 
03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 
crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 
03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc 
restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc 
restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc 
restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 28 03:35:44 crc restorecon[4572]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 
crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc 
restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:44 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc 
restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc 
restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc 
restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc 
restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 03:35:45 crc restorecon[4572]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 28 03:35:45 crc kubenswrapper[4624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 03:35:45 crc kubenswrapper[4624]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 28 03:35:45 crc kubenswrapper[4624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 03:35:45 crc kubenswrapper[4624]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 28 03:35:45 crc kubenswrapper[4624]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 28 03:35:45 crc kubenswrapper[4624]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.783564 4624 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788586 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788619 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788630 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788639 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788648 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788656 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788666 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788675 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788685 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788692 4624 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788700 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788708 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788716 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788725 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788732 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788740 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788748 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788757 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788765 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788772 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788780 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788787 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788795 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788803 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 
03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788818 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788826 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788834 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788841 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788852 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788861 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788869 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788878 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788889 4624 feature_gate.go:330] unrecognized feature gate: Example Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788897 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788905 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788913 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788921 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788928 4624 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788936 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788944 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788954 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788962 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788970 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788977 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788985 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.788993 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789000 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789008 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789015 4624 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789023 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789031 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789039 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789049 4624 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789058 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789068 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789078 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789111 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789119 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789128 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789138 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789146 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789154 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789165 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789175 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789185 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789193 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789200 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789209 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789217 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789225 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.789235 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
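The block above repeats one `feature_gate.go:330` warning per unknown gate. A minimal sketch for tallying which gates the kubelet rejected, assuming the journal text has been saved to a plain file (the `count_unrecognized_gates` helper and the sample string are illustrative, not part of the kubelet):

```python
import re
from collections import Counter

def count_unrecognized_gates(text: str) -> Counter:
    # The gate name is the token following "unrecognized feature gate: ".
    gates = re.findall(r"unrecognized feature gate: (\S+)", text)
    return Counter(gates)

# Two entries copied from the journal format above, concatenated as captured.
sample = (
    "W0228 03:35:45.788700 4624 feature_gate.go:330] "
    "unrecognized feature gate: ImageStreamImportMode "
    "W0228 03:35:45.788716 4624 feature_gate.go:330] "
    "unrecognized feature gate: VolumeGroupSnapshot"
)
print(count_unrecognized_gates(sample))
```

Because the pattern keys on the message text rather than line boundaries, it tolerates the mid-entry wrapping seen in this transcript.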
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790212 4624 flags.go:64] FLAG: --address="0.0.0.0" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790237 4624 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790251 4624 flags.go:64] FLAG: --anonymous-auth="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790263 4624 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790274 4624 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790283 4624 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790294 4624 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790305 4624 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790315 4624 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790325 4624 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790334 4624 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790344 4624 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790354 4624 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790364 4624 flags.go:64] FLAG: --cgroup-root="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790373 4624 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790383 4624 flags.go:64] FLAG: --client-ca-file="" Feb 28 03:35:45 crc kubenswrapper[4624]: 
I0228 03:35:45.790392 4624 flags.go:64] FLAG: --cloud-config="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790401 4624 flags.go:64] FLAG: --cloud-provider="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.790410 4624 flags.go:64] FLAG: --cluster-dns="[]" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793286 4624 flags.go:64] FLAG: --cluster-domain="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793298 4624 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793309 4624 flags.go:64] FLAG: --config-dir="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793317 4624 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793327 4624 flags.go:64] FLAG: --container-log-max-files="5" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793343 4624 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793353 4624 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793362 4624 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793371 4624 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793380 4624 flags.go:64] FLAG: --contention-profiling="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793389 4624 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793398 4624 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793409 4624 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793417 4624 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 28 03:35:45 crc 
kubenswrapper[4624]: I0228 03:35:45.793429 4624 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793438 4624 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793447 4624 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793456 4624 flags.go:64] FLAG: --enable-load-reader="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793468 4624 flags.go:64] FLAG: --enable-server="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793477 4624 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793488 4624 flags.go:64] FLAG: --event-burst="100" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793498 4624 flags.go:64] FLAG: --event-qps="50" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793507 4624 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793516 4624 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793525 4624 flags.go:64] FLAG: --eviction-hard="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793562 4624 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793572 4624 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793581 4624 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793590 4624 flags.go:64] FLAG: --eviction-soft="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793599 4624 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793611 4624 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 28 
03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793620 4624 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793629 4624 flags.go:64] FLAG: --experimental-mounter-path="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793638 4624 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793647 4624 flags.go:64] FLAG: --fail-swap-on="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793656 4624 flags.go:64] FLAG: --feature-gates="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793666 4624 flags.go:64] FLAG: --file-check-frequency="20s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793675 4624 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793685 4624 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793695 4624 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793705 4624 flags.go:64] FLAG: --healthz-port="10248" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793714 4624 flags.go:64] FLAG: --help="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793723 4624 flags.go:64] FLAG: --hostname-override="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793732 4624 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793741 4624 flags.go:64] FLAG: --http-check-frequency="20s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793772 4624 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793782 4624 flags.go:64] FLAG: --image-credential-provider-config="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793791 4624 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 28 
03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793800 4624 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793809 4624 flags.go:64] FLAG: --image-service-endpoint="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793818 4624 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793826 4624 flags.go:64] FLAG: --kube-api-burst="100" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793835 4624 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793845 4624 flags.go:64] FLAG: --kube-api-qps="50" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793855 4624 flags.go:64] FLAG: --kube-reserved="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793863 4624 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793873 4624 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793882 4624 flags.go:64] FLAG: --kubelet-cgroups="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793891 4624 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793900 4624 flags.go:64] FLAG: --lock-file="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793908 4624 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793917 4624 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793926 4624 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793939 4624 flags.go:64] FLAG: --log-json-split-stream="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793950 4624 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 28 
03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793958 4624 flags.go:64] FLAG: --log-text-split-stream="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793967 4624 flags.go:64] FLAG: --logging-format="text" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793976 4624 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793985 4624 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.793994 4624 flags.go:64] FLAG: --manifest-url="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794003 4624 flags.go:64] FLAG: --manifest-url-header="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794014 4624 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794023 4624 flags.go:64] FLAG: --max-open-files="1000000" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794033 4624 flags.go:64] FLAG: --max-pods="110" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794042 4624 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794051 4624 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794060 4624 flags.go:64] FLAG: --memory-manager-policy="None" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794068 4624 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794078 4624 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794113 4624 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794123 4624 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 28 03:35:45 
crc kubenswrapper[4624]: I0228 03:35:45.794142 4624 flags.go:64] FLAG: --node-status-max-images="50" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794151 4624 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794160 4624 flags.go:64] FLAG: --oom-score-adj="-999" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794169 4624 flags.go:64] FLAG: --pod-cidr="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794179 4624 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794192 4624 flags.go:64] FLAG: --pod-manifest-path="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794200 4624 flags.go:64] FLAG: --pod-max-pids="-1" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794210 4624 flags.go:64] FLAG: --pods-per-core="0" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794219 4624 flags.go:64] FLAG: --port="10250" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794229 4624 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794238 4624 flags.go:64] FLAG: --provider-id="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794246 4624 flags.go:64] FLAG: --qos-reserved="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794255 4624 flags.go:64] FLAG: --read-only-port="10255" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794264 4624 flags.go:64] FLAG: --register-node="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794273 4624 flags.go:64] FLAG: --register-schedulable="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794282 4624 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794296 4624 
flags.go:64] FLAG: --registry-burst="10" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794305 4624 flags.go:64] FLAG: --registry-qps="5" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794314 4624 flags.go:64] FLAG: --reserved-cpus="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794323 4624 flags.go:64] FLAG: --reserved-memory="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794333 4624 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794342 4624 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794351 4624 flags.go:64] FLAG: --rotate-certificates="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794360 4624 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794369 4624 flags.go:64] FLAG: --runonce="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794377 4624 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794386 4624 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794395 4624 flags.go:64] FLAG: --seccomp-default="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794403 4624 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794413 4624 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794422 4624 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794431 4624 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794440 4624 flags.go:64] FLAG: --storage-driver-password="root" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794449 4624 
flags.go:64] FLAG: --storage-driver-secure="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794459 4624 flags.go:64] FLAG: --storage-driver-table="stats" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794469 4624 flags.go:64] FLAG: --storage-driver-user="root" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794477 4624 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794486 4624 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794495 4624 flags.go:64] FLAG: --system-cgroups="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794503 4624 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794517 4624 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794526 4624 flags.go:64] FLAG: --tls-cert-file="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794534 4624 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794545 4624 flags.go:64] FLAG: --tls-min-version="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794554 4624 flags.go:64] FLAG: --tls-private-key-file="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794563 4624 flags.go:64] FLAG: --topology-manager-policy="none" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794572 4624 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794581 4624 flags.go:64] FLAG: --topology-manager-scope="container" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794590 4624 flags.go:64] FLAG: --v="2" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794602 4624 flags.go:64] FLAG: --version="false" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794613 4624 flags.go:64] FLAG: 
--vmodule="" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794623 4624 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.794633 4624 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794827 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794837 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794846 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794856 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794868 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
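The `flags.go:64] FLAG: --name="value"` entries above enumerate every kubelet flag at startup. A small sketch for pulling them into a dictionary (the `parse_kubelet_flags` helper and sample are assumptions for illustration; only the log format itself comes from the transcript):

```python
import re

def parse_kubelet_flags(text: str) -> dict:
    # Each entry has the shape: FLAG: --flag-name="value"
    # (values may be empty, lists like "[]", or durations like "2m0s").
    return dict(re.findall(r'FLAG: --([\w-]+)="([^"]*)"', text))

# Two entries copied from the journal format above.
sample = (
    'I0228 03:35:45.790212 4624 flags.go:64] FLAG: --address="0.0.0.0" '
    'I0228 03:35:45.790354 4624 flags.go:64] FLAG: --cgroup-driver="cgroupfs"'
)
flags = parse_kubelet_flags(sample)
print(flags["address"], flags["cgroup-driver"])
```

Comparing two such dictionaries from different boots is a quick way to spot a flag that changed between kubelet restarts.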
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794879 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794887 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794897 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794906 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794914 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794923 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794931 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794940 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794951 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794959 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794967 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794975 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794983 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794991 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.794998 4624 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795006 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795013 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795021 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795029 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795036 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795044 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795052 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795060 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795068 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795077 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795109 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795117 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795124 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795132 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 03:35:45 crc 
kubenswrapper[4624]: W0228 03:35:45.795142 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795152 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795160 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795168 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795176 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795185 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795192 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795200 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795207 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795216 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795224 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795231 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795240 4624 feature_gate.go:330] unrecognized feature gate: Example Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795247 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795255 4624 feature_gate.go:330] 
unrecognized feature gate: SetEIPForNLBIngressController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795264 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795271 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795279 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795287 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795294 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795302 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795310 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795317 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795327 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795335 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795343 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795351 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795359 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795367 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy 
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795374 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795383 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795391 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795399 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795407 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795417 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795427 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.795435 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.795458 4624 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.805658 4624 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.805717 4624 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 28 03:35:45 
crc kubenswrapper[4624]: W0228 03:35:45.805841 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805856 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805865 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805875 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805884 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805892 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805900 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805908 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805917 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805926 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805936 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805944 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805953 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805961 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805970 4624 
feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805978 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805985 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.805993 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806001 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806012 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806025 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806033 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806042 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806052 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806063 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806071 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806080 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806116 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806124 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806133 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806140 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806148 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806156 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806165 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806177 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806187 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806196 4624 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806204 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806211 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806219 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806229 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806239 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806248 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806257 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806269 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806278 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806286 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806295 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806302 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 03:35:45 crc 
kubenswrapper[4624]: W0228 03:35:45.806310 4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806318 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806326 4624 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806333 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806341 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806349 4624 feature_gate.go:330] unrecognized feature gate: Example Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806357 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806364 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806372 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806381 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806389 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806396 4624 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806404 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806412 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806420 4624 
feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806428 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806436 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806444 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806452 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806460 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806467 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806476 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.806491 4624 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806711 4624 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806725 4624 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806734 4624 feature_gate.go:330] unrecognized feature gate: 
MinimumKubeletVersion Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806743 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806754 4624 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806762 4624 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806771 4624 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806780 4624 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806788 4624 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806797 4624 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806805 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806814 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806821 4624 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806829 4624 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806837 4624 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806845 4624 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806853 4624 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806861 
4624 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806869 4624 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806877 4624 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806884 4624 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806892 4624 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806901 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806908 4624 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806917 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806925 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806934 4624 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806941 4624 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806950 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806957 4624 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806966 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806974 4624 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806982 4624 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.806991 4624 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807002 4624 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807011 4624 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807019 4624 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807029 4624 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807039 4624 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807048 4624 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807057 4624 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807066 4624 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807074 4624 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807104 4624 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807115 4624 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807125 4624 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807134 4624 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807143 4624 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807152 4624 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807161 4624 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807170 4624 feature_gate.go:330] unrecognized feature gate: Example Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807178 4624 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807186 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807194 4624 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807203 4624 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807210 4624 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807218 4624 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807226 4624 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807234 4624 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 
03:35:45.807242 4624 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807252 4624 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807263 4624 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807272 4624 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807281 4624 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807289 4624 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807298 4624 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807306 4624 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807315 4624 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807323 4624 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807331 4624 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.807340 4624 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.807352 4624 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.807629 4624 server.go:940] "Client rotation is on, will bootstrap in background" Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.812472 4624 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.817489 4624 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.817669 4624 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.821999 4624 server.go:997] "Starting client certificate rotation" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.822044 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.822366 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.854080 4624 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.858943 4624 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.860776 4624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.879206 4624 log.go:25] "Validated CRI v1 runtime API" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.925577 4624 log.go:25] "Validated CRI v1 image API" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.928566 4624 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.935020 4624 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-28-03-30-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.935136 4624 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.960614 4624 manager.go:217] Machine: {Timestamp:2026-02-28 03:35:45.957659356 +0000 UTC m=+0.621698755 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:b8a128da-abb3-432c-b92e-2d237967b814 BootID:2093f8b2-b9e5-423f-9f72-24050fa8f25c Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:61:9a:8b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:61:9a:8b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cd:d2:89 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:22:17:7b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d5:ed:63 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e2:b3:bb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ea:6b:ef:3a:29:04 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:3f:0f:27:a3:96 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.960958 4624 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.961226 4624 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.963058 4624 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.963482 4624 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.963545 4624 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.963905 4624 topology_manager.go:138] "Creating topology manager with none policy"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.963925 4624 container_manager_linux.go:303] "Creating device plugin manager"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.964592 4624 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.964646 4624 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.964948 4624 state_mem.go:36] "Initialized new in-memory state store"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.965126 4624 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.968726 4624 kubelet.go:418] "Attempting to sync node with API server"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.968763 4624 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.968805 4624 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.968827 4624 kubelet.go:324] "Adding apiserver pod source"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.968845 4624 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.973856 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.973942 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.973952 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.974121 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.975167 4624 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.976449 4624 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.979695 4624 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.982696 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983183 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983256 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983276 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983301 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983317 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983340 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983381 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983398 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983412 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983479 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.983494 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.984590 4624 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.985516 4624 server.go:1280] "Started kubelet"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.985836 4624 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.985939 4624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.986876 4624 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.987723 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Feb 28 03:35:45 crc systemd[1]: Started Kubernetes Kubelet.
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.992421 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.992614 4624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.993889 4624 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.994066 4624 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.995230 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.995430 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms"
Feb 28 03:35:45 crc kubenswrapper[4624]: W0228 03:35:45.995737 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Feb 28 03:35:45 crc kubenswrapper[4624]: E0228 03:35:45.995801 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.994190 4624 server.go:460] "Adding debug handlers to kubelet server"
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.995926 4624 factory.go:55] Registering systemd factory
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.995961 4624 factory.go:221] Registration of the systemd container factory successfully
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.997626 4624 factory.go:153] Registering CRI-O factory
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.997711 4624 factory.go:221] Registration of the crio container factory successfully
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.998079 4624 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.998163 4624 factory.go:103] Registering Raw factory
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.998198 4624 manager.go:1196] Started watching for new ooms in manager
Feb 28 03:35:45 crc kubenswrapper[4624]: I0228 03:35:45.999571 4624 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.000397 4624 manager.go:319] Starting recovery of all containers
Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.007539 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18984bc112bd6c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,LastTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.027941 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028112 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028145 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028170 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028203 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028224 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028258 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028280 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028313 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028335 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028355 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028416 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028438 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028472 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028491 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028516 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028537 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028561 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028588 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028607 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028634 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028663 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028684 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028710 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028730 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028756 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028782 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028813 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028844 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028865 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028888 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028913 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028934 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028964 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.028987 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029008 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029032 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029052 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029071 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029123 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029147 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029175 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029194 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029213 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029237 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029258 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029284 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029305 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029325 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029361 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029390 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029477 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029725 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029813 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029880 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029925 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.029970 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030006 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030047 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030078 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030202 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030224 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030300 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030337 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030364 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030383 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030404 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030429 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030448 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030967 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.030994 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.031030 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.031072 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.034294 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.034857 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.034945 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035021 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035162 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035253 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035367 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035648 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035729 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035809 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035875 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.035956 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036030 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036137 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036205 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036278 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config"
seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036358 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036446 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036520 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036616 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036718 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036804 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036872 4624 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.036949 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037020 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037101 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037178 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037278 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037355 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037427 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037507 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037587 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037709 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037798 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037876 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.037955 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038025 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038108 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038202 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038287 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038356 4624 manager.go:324] Recovery completed Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038363 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038498 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038540 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038562 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038580 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038606 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038628 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038650 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038669 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038687 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.038709 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041069 4624 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041138 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041164 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041187 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041208 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041230 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041250 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041271 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041291 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041311 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041333 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041354 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041375 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041395 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041416 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041439 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041461 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041499 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041522 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041545 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" 
seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041565 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041583 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041601 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041620 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041637 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041656 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 
03:35:46.041700 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041805 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041835 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041857 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041881 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041903 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041928 4624 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041953 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.041974 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042000 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042036 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042059 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042139 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042166 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042188 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042210 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042231 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042256 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042279 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042302 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042327 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042350 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042373 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042397 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042419 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042444 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042467 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042489 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042511 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042609 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042635 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042659 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042682 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042706 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042730 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042753 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042775 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" 
seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042796 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042843 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042864 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042965 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.042989 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043064 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 
03:35:46.043127 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043153 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043177 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043199 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043222 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043244 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043266 4624 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043289 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043338 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043384 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043407 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043428 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043450 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043471 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043495 4624 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043515 4624 reconstruct.go:97] "Volume reconstruction finished" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.043529 4624 reconciler.go:26] "Reconciler: start to sync state" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.050206 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.052634 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.052675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.052688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.053943 4624 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.053964 4624 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 28 03:35:46 crc 
kubenswrapper[4624]: I0228 03:35:46.053985 4624 state_mem.go:36] "Initialized new in-memory state store" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.076249 4624 policy_none.go:49] "None policy: Start" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.077821 4624 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.077873 4624 state_mem.go:35] "Initializing new in-memory state store" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.082340 4624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.085737 4624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.085805 4624 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.085848 4624 kubelet.go:2335] "Starting kubelet main sync loop" Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.085999 4624 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.086804 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.086866 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:46 crc 
kubenswrapper[4624]: E0228 03:35:46.095405 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.142520 4624 manager.go:334] "Starting Device Plugin manager" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.142812 4624 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.142835 4624 server.go:79] "Starting device plugin registration server" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.143376 4624 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.143400 4624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.143586 4624 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.143868 4624 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.143900 4624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.157166 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.186719 4624 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.186849 4624 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.188176 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.188222 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.188240 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.188414 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.188741 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.188842 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189428 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189479 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189657 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189793 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189852 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.189964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.190721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.190748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.190759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.190852 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.190963 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.190994 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192100 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192174 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192187 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192440 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192540 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192587 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192558 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192635 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.192604 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193393 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193427 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193456 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193484 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193487 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193553 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193826 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.193842 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc 
kubenswrapper[4624]: I0228 03:35:46.194047 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.194108 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.195022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.195075 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.195108 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.196050 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.244096 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245348 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245413 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245447 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245477 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245510 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245537 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245568 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245659 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245781 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245880 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.245929 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: 
I0228 03:35:46.245970 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.246005 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.246037 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.250030 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.250117 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.250135 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.250179 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.250844 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial 
tcp 38.102.83.227:6443: connect: connection refused" node="crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347500 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347569 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347596 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347616 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347639 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347666 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347756 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347829 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347856 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347875 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347898 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347918 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347937 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347956 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.347975 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348478 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc 
kubenswrapper[4624]: I0228 03:35:46.348579 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348622 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348630 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348663 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348701 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348692 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348770 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348760 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348806 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348825 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 
03:35:46.348736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.348808 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.349047 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.451858 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.454293 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.454342 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.454359 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.454397 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.454955 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.515313 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.520229 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.540604 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.558273 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.562649 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.576004 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a2719a20300687c36b3cc000f62de61dd322dcf084b37b45c519ac552fff8463 WatchSource:0}: Error finding container a2719a20300687c36b3cc000f62de61dd322dcf084b37b45c519ac552fff8463: Status 404 returned error can't find the container with id a2719a20300687c36b3cc000f62de61dd322dcf084b37b45c519ac552fff8463 Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.578268 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0900d28738a1b78015fefbf5a8045117396cc149042ed1a3c9eeb7b9353ec108 WatchSource:0}: Error finding container 0900d28738a1b78015fefbf5a8045117396cc149042ed1a3c9eeb7b9353ec108: Status 404 returned error can't find the container with id 0900d28738a1b78015fefbf5a8045117396cc149042ed1a3c9eeb7b9353ec108 Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.589908 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3d97c551c01c2ae373d3a02d6571df5bc2c278e93b60cb3a2a937488ab46402f WatchSource:0}: Error finding container 3d97c551c01c2ae373d3a02d6571df5bc2c278e93b60cb3a2a937488ab46402f: Status 404 returned error can't find the container with id 3d97c551c01c2ae373d3a02d6571df5bc2c278e93b60cb3a2a937488ab46402f Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.596050 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c5dc5519a11506e2fbc290d9441cfdc6108ebe021dd7e4044cc12192d32794d4 
WatchSource:0}: Error finding container c5dc5519a11506e2fbc290d9441cfdc6108ebe021dd7e4044cc12192d32794d4: Status 404 returned error can't find the container with id c5dc5519a11506e2fbc290d9441cfdc6108ebe021dd7e4044cc12192d32794d4 Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.596875 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.604965 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9bbdaa7ab52e7f9d6baa45cab32e11b9de5437e4cebbfeb36caaf876a07ec210 WatchSource:0}: Error finding container 9bbdaa7ab52e7f9d6baa45cab32e11b9de5437e4cebbfeb36caaf876a07ec210: Status 404 returned error can't find the container with id 9bbdaa7ab52e7f9d6baa45cab32e11b9de5437e4cebbfeb36caaf876a07ec210 Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.855816 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.857814 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.857857 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.857870 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.857900 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.858348 4624 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Feb 28 03:35:46 crc kubenswrapper[4624]: I0228 03:35:46.989241 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:46 crc kubenswrapper[4624]: W0228 03:35:46.993545 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:46 crc kubenswrapper[4624]: E0228 03:35:46.993686 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:47 crc kubenswrapper[4624]: W0228 03:35:47.070788 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.070929 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 
03:35:47.092603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9bbdaa7ab52e7f9d6baa45cab32e11b9de5437e4cebbfeb36caaf876a07ec210"} Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.094342 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c5dc5519a11506e2fbc290d9441cfdc6108ebe021dd7e4044cc12192d32794d4"} Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.095807 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d97c551c01c2ae373d3a02d6571df5bc2c278e93b60cb3a2a937488ab46402f"} Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.097610 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0900d28738a1b78015fefbf5a8045117396cc149042ed1a3c9eeb7b9353ec108"} Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.098779 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a2719a20300687c36b3cc000f62de61dd322dcf084b37b45c519ac552fff8463"} Feb 28 03:35:47 crc kubenswrapper[4624]: W0228 03:35:47.166869 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.166991 4624 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:47 crc kubenswrapper[4624]: W0228 03:35:47.267971 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.268063 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.398677 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.659427 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.661024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.661141 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.661174 4624 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.661225 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.662029 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.855037 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18984bc112bd6c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,LastTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 03:35:47.972635 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 03:35:47 crc kubenswrapper[4624]: E0228 03:35:47.973740 4624 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Feb 28 03:35:47 crc kubenswrapper[4624]: I0228 
03:35:47.988796 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.106383 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8a49b5101d13100cf99239b5636cfd4c9d7b80d6f0ef04f7a8f266cc680ba35b"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.106440 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.106463 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"670e6738b4331e619579527aedac8536a810d80eb541018d8bd4c310903d601e"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.106486 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f00c6464ab3eee8445a5fc9ad8b4fe08696b0cb42e3fa14744de2cdd09f02009"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.106504 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff98f82a375ad434c6c79d25921178c941462da96f5d64d615b63202ebde82fc"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.107944 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.107980 4624 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.107991 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.109741 4624 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a" exitCode=0 Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.109806 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.109930 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.111391 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.111451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.111473 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.113403 4624 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85" exitCode=0 Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.113440 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.113544 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.115180 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.115217 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.115239 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.116133 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601" exitCode=0 Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.116223 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.116299 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.117373 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.117419 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.117445 4624 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.119877 4624 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="397a73dbda33b44e1ae453c8fd4c1f6aecda06c5edaf58d2fbcccfc9c87e9320" exitCode=0 Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.119919 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"397a73dbda33b44e1ae453c8fd4c1f6aecda06c5edaf58d2fbcccfc9c87e9320"} Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.119998 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.120130 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.120984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.121036 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.121056 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.121748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.121819 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.121847 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 28 03:35:48 crc kubenswrapper[4624]: I0228 03:35:48.989373 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Feb 28 03:35:49 crc kubenswrapper[4624]: E0228 03:35:49.000102 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.128668 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.128731 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.128745 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.128758 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.131866 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9493b63c05016bd1c74e76914b75c45b097fbf94070731f917ebc9e72174275"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.131915 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c89e4d6b99379b7235c9eb2abb3a776e8e821e9b04834c74d03272a9508acd45"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.131928 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8949a42009c96977551f27dfee9afbe8b5e02852e3c5319f95ca43a891a32f5d"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.131974 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.136566 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.136597 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.136608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.139074 4624 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a" exitCode=0 Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.139198 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.139239 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.141292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.141331 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.141345 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.143007 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.143027 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1"} Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.143014 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.145002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.145035 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.145045 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.145734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.145750 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.145758 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.262243 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.263899 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.263939 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.263950 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:49 crc kubenswrapper[4624]: I0228 03:35:49.263980 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:35:49 crc kubenswrapper[4624]: E0228 03:35:49.264604 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.007270 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.155964 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2336a73818a720ae0a07577891508e819edb862366f329a655aa045cfe60967"} Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.156243 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.158411 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.158497 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.158525 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.161236 4624 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62" exitCode=0 Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.161425 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.161471 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.161313 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62"} Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.161485 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.161473 4624 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.162033 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163354 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163397 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163422 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163501 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163536 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163553 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163869 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163900 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163914 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163909 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:50 crc 
kubenswrapper[4624]: I0228 03:35:50.163962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.163981 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:50 crc kubenswrapper[4624]: I0228 03:35:50.631899 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.168597 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf"} Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.168694 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb"} Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.168715 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434"} Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.168718 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.168779 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.169585 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.169616 4624 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.169630 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:51 crc kubenswrapper[4624]: I0228 03:35:51.992251 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.045420 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.177916 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.178315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134"} Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.178754 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376"} Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.178495 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.180461 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.180502 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.180523 4624 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.180543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.180567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.180544 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.465481 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.467443 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.467505 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.467524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:52 crc kubenswrapper[4624]: I0228 03:35:52.467559 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.007278 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.007397 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.181697 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.181706 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.184208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.184264 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.184282 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.184356 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.184410 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.184434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.822777 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.823161 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:53 crc 
kubenswrapper[4624]: I0228 03:35:53.824929 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.824995 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.825010 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:53 crc kubenswrapper[4624]: I0228 03:35:53.833328 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.184156 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.185626 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.185733 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.185763 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.792045 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.792343 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.793837 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.793868 4624 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:54 crc kubenswrapper[4624]: I0228 03:35:54.793881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:55 crc kubenswrapper[4624]: I0228 03:35:55.525397 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:35:55 crc kubenswrapper[4624]: I0228 03:35:55.525702 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:55 crc kubenswrapper[4624]: I0228 03:35:55.527205 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:55 crc kubenswrapper[4624]: I0228 03:35:55.527251 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:55 crc kubenswrapper[4624]: I0228 03:35:55.527263 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.003584 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.003785 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.004794 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.004823 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.004832 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:56 crc kubenswrapper[4624]: E0228 
03:35:56.158211 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.533698 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.534078 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.535884 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.535916 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.535928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:56 crc kubenswrapper[4624]: I0228 03:35:56.824455 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:57 crc kubenswrapper[4624]: I0228 03:35:57.193952 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:57 crc kubenswrapper[4624]: I0228 03:35:57.195682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:57 crc kubenswrapper[4624]: I0228 03:35:57.195739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:57 crc kubenswrapper[4624]: I0228 03:35:57.195759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:57 crc kubenswrapper[4624]: I0228 03:35:57.204551 4624 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:35:58 crc kubenswrapper[4624]: I0228 03:35:58.197336 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:35:58 crc kubenswrapper[4624]: I0228 03:35:58.199183 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:35:58 crc kubenswrapper[4624]: I0228 03:35:58.199231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:35:58 crc kubenswrapper[4624]: I0228 03:35:58.199241 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:35:59 crc kubenswrapper[4624]: W0228 03:35:59.851839 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 03:35:59 crc kubenswrapper[4624]: I0228 03:35:59.851962 4624 trace.go:236] Trace[1261165326]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 03:35:49.849) (total time: 10002ms): Feb 28 03:35:59 crc kubenswrapper[4624]: Trace[1261165326]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:35:59.851) Feb 28 03:35:59 crc kubenswrapper[4624]: Trace[1261165326]: [10.002043819s] [10.002043819s] END Feb 28 03:35:59 crc kubenswrapper[4624]: E0228 03:35:59.851994 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS 
handshake timeout" logger="UnhandledError" Feb 28 03:35:59 crc kubenswrapper[4624]: W0228 03:35:59.912124 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 03:35:59 crc kubenswrapper[4624]: I0228 03:35:59.912259 4624 trace.go:236] Trace[981533397]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 03:35:49.910) (total time: 10002ms): Feb 28 03:35:59 crc kubenswrapper[4624]: Trace[981533397]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:35:59.912) Feb 28 03:35:59 crc kubenswrapper[4624]: Trace[981533397]: [10.002063999s] [10.002063999s] END Feb 28 03:35:59 crc kubenswrapper[4624]: E0228 03:35:59.912291 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 03:35:59 crc kubenswrapper[4624]: I0228 03:35:59.989924 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 28 03:36:00 crc kubenswrapper[4624]: W0228 03:36:00.014364 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.014470 4624 trace.go:236] Trace[2128724868]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 03:35:50.012) (total time: 10001ms): Feb 28 03:36:00 crc kubenswrapper[4624]: Trace[2128724868]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (03:36:00.014) Feb 28 03:36:00 crc kubenswrapper[4624]: Trace[2128724868]: [10.001742999s] [10.001742999s] END Feb 28 03:36:00 crc kubenswrapper[4624]: E0228 03:36:00.014500 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.046552 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.046654 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.205577 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.208114 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b2336a73818a720ae0a07577891508e819edb862366f329a655aa045cfe60967" exitCode=255 Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.208169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b2336a73818a720ae0a07577891508e819edb862366f329a655aa045cfe60967"} Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.208365 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.209456 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.209496 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.209511 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.210233 4624 scope.go:117] "RemoveContainer" containerID="b2336a73818a720ae0a07577891508e819edb862366f329a655aa045cfe60967" Feb 28 03:36:00 crc kubenswrapper[4624]: W0228 03:36:00.367305 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.367451 4624 trace.go:236] Trace[1579507944]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 03:35:50.365) (total time: 10002ms): Feb 28 03:36:00 crc kubenswrapper[4624]: Trace[1579507944]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (03:36:00.367) Feb 28 03:36:00 crc kubenswrapper[4624]: Trace[1579507944]: [10.002233883s] [10.002233883s] END Feb 28 03:36:00 crc kubenswrapper[4624]: E0228 03:36:00.367488 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 03:36:00 crc kubenswrapper[4624]: E0228 03:36:00.582670 4624 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:00 crc kubenswrapper[4624]: E0228 03:36:00.585408 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:00Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 28 03:36:00 crc kubenswrapper[4624]: E0228 03:36:00.585690 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:00Z is after 2026-02-23T05:33:13Z" 
event="&Event{ObjectMeta:{crc.18984bc112bd6c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,LastTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.590350 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.590402 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.596825 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.596860 4624 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 28 03:36:00 crc kubenswrapper[4624]: E0228 03:36:00.600542 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:00Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.665281 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]log ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]etcd ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/priority-and-fairness-filter ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 03:36:00 crc kubenswrapper[4624]: 
[+]poststarthook/start-apiextensions-informers ok Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-system-namespaces-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/start-kube-aggregator-informers ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 28 03:36:00 crc kubenswrapper[4624]: 
[-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]autoregister-completion ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/apiservice-openapi-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 28 03:36:00 crc kubenswrapper[4624]: livez check failed Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.665375 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:36:00 crc kubenswrapper[4624]: I0228 03:36:00.995642 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:00Z is after 2026-02-23T05:33:13Z Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.212733 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.215700 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243"} Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.215919 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.218562 4624 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.218602 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.218615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:01 crc kubenswrapper[4624]: I0228 03:36:01.993514 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:01Z is after 2026-02-23T05:33:13Z Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.222337 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.223456 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.226323 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" exitCode=255 Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.226383 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243"} Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.226467 4624 scope.go:117] "RemoveContainer" 
containerID="b2336a73818a720ae0a07577891508e819edb862366f329a655aa045cfe60967" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.226706 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.228405 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.228446 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.228460 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.230172 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:02 crc kubenswrapper[4624]: E0228 03:36:02.230443 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:02 crc kubenswrapper[4624]: I0228 03:36:02.994150 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:02Z is after 2026-02-23T05:33:13Z Feb 28 03:36:03 crc kubenswrapper[4624]: I0228 03:36:03.007567 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:36:03 crc kubenswrapper[4624]: I0228 03:36:03.007820 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:36:03 crc kubenswrapper[4624]: I0228 03:36:03.232394 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 03:36:03 crc kubenswrapper[4624]: W0228 03:36:03.868913 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:03Z is after 2026-02-23T05:33:13Z Feb 28 03:36:03 crc kubenswrapper[4624]: E0228 03:36:03.869043 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:03Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:03 crc kubenswrapper[4624]: W0228 03:36:03.908951 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:03Z is after 2026-02-23T05:33:13Z Feb 28 03:36:03 crc kubenswrapper[4624]: E0228 03:36:03.909029 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:03Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:03 crc kubenswrapper[4624]: I0228 03:36:03.994332 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:03Z is after 2026-02-23T05:33:13Z Feb 28 03:36:04 crc kubenswrapper[4624]: W0228 03:36:04.500744 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:04Z is after 2026-02-23T05:33:13Z Feb 28 03:36:04 crc kubenswrapper[4624]: E0228 03:36:04.500852 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T03:36:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:04 crc kubenswrapper[4624]: W0228 03:36:04.600325 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:04Z is after 2026-02-23T05:33:13Z Feb 28 03:36:04 crc kubenswrapper[4624]: E0228 03:36:04.600481 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:04 crc kubenswrapper[4624]: I0228 03:36:04.996119 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:04Z is after 2026-02-23T05:33:13Z Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.123347 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.123630 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.125782 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.125851 
4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.125873 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.126898 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:05 crc kubenswrapper[4624]: E0228 03:36:05.127227 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.641742 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.642727 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.645068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.645516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.645800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.647350 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:05 crc 
kubenswrapper[4624]: E0228 03:36:05.647965 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.651234 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:05 crc kubenswrapper[4624]: I0228 03:36:05.994214 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:05Z is after 2026-02-23T05:33:13Z Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.047380 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.048012 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.050127 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.050200 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.050228 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.073619 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-etcd/etcd-crc" Feb 28 03:36:06 crc kubenswrapper[4624]: E0228 03:36:06.158846 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.245109 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.245209 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.246718 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.246759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.246774 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.246771 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.246938 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.246955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.247794 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:06 crc kubenswrapper[4624]: E0228 03:36:06.248008 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:06 crc kubenswrapper[4624]: E0228 03:36:06.992319 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:06Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 28 03:36:06 crc kubenswrapper[4624]: I0228 03:36:06.997137 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:06Z is after 2026-02-23T05:33:13Z Feb 28 03:36:07 crc kubenswrapper[4624]: I0228 03:36:07.000668 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:07 crc kubenswrapper[4624]: I0228 03:36:07.002834 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:07 crc kubenswrapper[4624]: I0228 03:36:07.002903 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:07 crc kubenswrapper[4624]: I0228 03:36:07.002928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:07 crc kubenswrapper[4624]: I0228 03:36:07.002989 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:07 crc kubenswrapper[4624]: E0228 03:36:07.008056 4624 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:07Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 03:36:07 crc kubenswrapper[4624]: I0228 03:36:07.993191 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:07Z is after 2026-02-23T05:33:13Z Feb 28 03:36:08 crc kubenswrapper[4624]: I0228 03:36:08.782914 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 03:36:08 crc kubenswrapper[4624]: E0228 03:36:08.788778 4624 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:08Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:08 crc kubenswrapper[4624]: I0228 03:36:08.994344 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:08Z is after 2026-02-23T05:33:13Z Feb 28 03:36:09 crc kubenswrapper[4624]: I0228 03:36:09.992843 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:09Z is after 2026-02-23T05:33:13Z Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.045664 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.046026 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.048149 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.048213 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.048229 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.049105 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:10 crc kubenswrapper[4624]: E0228 03:36:10.049339 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:10 crc kubenswrapper[4624]: E0228 03:36:10.592214 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:10Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18984bc112bd6c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,LastTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:10 crc kubenswrapper[4624]: I0228 03:36:10.993747 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:10Z is after 2026-02-23T05:33:13Z Feb 28 03:36:11 crc kubenswrapper[4624]: W0228 03:36:11.122422 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:11Z is after 2026-02-23T05:33:13Z Feb 28 03:36:11 crc kubenswrapper[4624]: E0228 03:36:11.122547 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:11Z is after 2026-02-23T05:33:13Z" 
logger="UnhandledError" Feb 28 03:36:11 crc kubenswrapper[4624]: I0228 03:36:11.992623 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:11Z is after 2026-02-23T05:33:13Z Feb 28 03:36:12 crc kubenswrapper[4624]: W0228 03:36:12.526022 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:12Z is after 2026-02-23T05:33:13Z Feb 28 03:36:12 crc kubenswrapper[4624]: E0228 03:36:12.526176 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:12 crc kubenswrapper[4624]: I0228 03:36:12.992388 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:12Z is after 2026-02-23T05:33:13Z Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.007308 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.007425 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.007508 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.007814 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.009567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.009649 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.009670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.010658 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f00c6464ab3eee8445a5fc9ad8b4fe08696b0cb42e3fa14744de2cdd09f02009"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 
03:36:13.011025 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://f00c6464ab3eee8445a5fc9ad8b4fe08696b0cb42e3fa14744de2cdd09f02009" gracePeriod=30 Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.270488 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.271074 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f00c6464ab3eee8445a5fc9ad8b4fe08696b0cb42e3fa14744de2cdd09f02009"} Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.271182 4624 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f00c6464ab3eee8445a5fc9ad8b4fe08696b0cb42e3fa14744de2cdd09f02009" exitCode=255 Feb 28 03:36:13 crc kubenswrapper[4624]: I0228 03:36:13.993406 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:13Z is after 2026-02-23T05:33:13Z Feb 28 03:36:13 crc kubenswrapper[4624]: E0228 03:36:13.997356 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:13Z is after 2026-02-23T05:33:13Z" 
interval="7s" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.008974 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.010512 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.010549 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.010559 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.010588 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:14 crc kubenswrapper[4624]: E0228 03:36:14.014475 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:14Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.276258 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.276932 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bf3a7c9c875a2058e2a3e6970031374de2c02e6715fa7f390fb44cf4cff66e53"} Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.277076 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 
03:36:14.277863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.277891 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.277901 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:14 crc kubenswrapper[4624]: I0228 03:36:14.993753 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:14Z is after 2026-02-23T05:33:13Z Feb 28 03:36:15 crc kubenswrapper[4624]: W0228 03:36:15.240764 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:15Z is after 2026-02-23T05:33:13Z Feb 28 03:36:15 crc kubenswrapper[4624]: E0228 03:36:15.240856 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:15 crc kubenswrapper[4624]: I0228 03:36:15.280528 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:15 crc kubenswrapper[4624]: I0228 
03:36:15.281762 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:15 crc kubenswrapper[4624]: I0228 03:36:15.281866 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:15 crc kubenswrapper[4624]: I0228 03:36:15.281888 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:15 crc kubenswrapper[4624]: I0228 03:36:15.991596 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:15Z is after 2026-02-23T05:33:13Z Feb 28 03:36:16 crc kubenswrapper[4624]: E0228 03:36:16.159024 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:36:16 crc kubenswrapper[4624]: I0228 03:36:16.824627 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:36:16 crc kubenswrapper[4624]: I0228 03:36:16.824879 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:16 crc kubenswrapper[4624]: I0228 03:36:16.827006 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:16 crc kubenswrapper[4624]: I0228 03:36:16.827196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:16 crc kubenswrapper[4624]: I0228 03:36:16.827265 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:16 crc kubenswrapper[4624]: I0228 
03:36:16.993893 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:16Z is after 2026-02-23T05:33:13Z Feb 28 03:36:17 crc kubenswrapper[4624]: W0228 03:36:17.156220 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:17Z is after 2026-02-23T05:33:13Z Feb 28 03:36:17 crc kubenswrapper[4624]: E0228 03:36:17.156326 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:17 crc kubenswrapper[4624]: I0228 03:36:17.991495 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:17Z is after 2026-02-23T05:33:13Z Feb 28 03:36:18 crc kubenswrapper[4624]: I0228 03:36:18.993701 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-28T03:36:18Z is after 2026-02-23T05:33:13Z Feb 28 03:36:19 crc kubenswrapper[4624]: I0228 03:36:19.993383 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:19Z is after 2026-02-23T05:33:13Z Feb 28 03:36:20 crc kubenswrapper[4624]: I0228 03:36:20.007281 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:36:20 crc kubenswrapper[4624]: I0228 03:36:20.007569 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:20 crc kubenswrapper[4624]: I0228 03:36:20.009304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:20 crc kubenswrapper[4624]: I0228 03:36:20.009373 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:20 crc kubenswrapper[4624]: I0228 03:36:20.009393 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:20 crc kubenswrapper[4624]: E0228 03:36:20.596919 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:20Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18984bc112bd6c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,LastTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:20 crc kubenswrapper[4624]: I0228 03:36:20.990946 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:20Z is after 2026-02-23T05:33:13Z Feb 28 03:36:21 crc kubenswrapper[4624]: E0228 03:36:21.001588 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:21Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 28 03:36:21 crc kubenswrapper[4624]: I0228 03:36:21.015007 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:21 crc kubenswrapper[4624]: I0228 03:36:21.016982 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:21 crc kubenswrapper[4624]: I0228 03:36:21.017041 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:21 crc kubenswrapper[4624]: I0228 03:36:21.017060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:21 crc kubenswrapper[4624]: I0228 03:36:21.017123 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:21 crc 
kubenswrapper[4624]: E0228 03:36:21.019754 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:21Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 03:36:21 crc kubenswrapper[4624]: I0228 03:36:21.993170 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:21Z is after 2026-02-23T05:33:13Z Feb 28 03:36:22 crc kubenswrapper[4624]: I0228 03:36:22.992331 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:22Z is after 2026-02-23T05:33:13Z Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.007988 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.008465 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.087006 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.088224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.088269 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.088281 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.088993 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.307873 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.309230 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7"} Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.309417 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.310173 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.310199 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 
03:36:23.310211 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:23 crc kubenswrapper[4624]: I0228 03:36:23.993692 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:23Z is after 2026-02-23T05:33:13Z Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.314818 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.315593 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.318187 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" exitCode=255 Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.318266 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7"} Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.318379 4624 scope.go:117] "RemoveContainer" containerID="c1aabe36242b59e52b5d705d8d3500aecff6be5b3fecec617526d67711739243" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.318508 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.319291 4624 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.319320 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.319351 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.319947 4624 scope.go:117] "RemoveContainer" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" Feb 28 03:36:24 crc kubenswrapper[4624]: E0228 03:36:24.320152 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.835440 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 03:36:24 crc kubenswrapper[4624]: E0228 03:36:24.842548 4624 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 03:36:24 crc kubenswrapper[4624]: E0228 03:36:24.843816 4624 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still 
unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 28 03:36:24 crc kubenswrapper[4624]: I0228 03:36:24.992531 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T03:36:24Z is after 2026-02-23T05:33:13Z Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.122624 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.324914 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.327595 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.329273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.329330 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.329356 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.330558 4624 scope.go:117] "RemoveContainer" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" Feb 28 03:36:25 crc kubenswrapper[4624]: E0228 03:36:25.330998 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 
20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:25 crc kubenswrapper[4624]: W0228 03:36:25.521184 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:25 crc kubenswrapper[4624]: E0228 03:36:25.521258 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 03:36:25 crc kubenswrapper[4624]: I0228 03:36:25.996059 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:26 crc kubenswrapper[4624]: E0228 03:36:26.159208 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:36:26 crc kubenswrapper[4624]: I0228 03:36:26.996187 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:27 crc kubenswrapper[4624]: I0228 03:36:27.994288 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in 
API group "storage.k8s.io" at the cluster scope Feb 28 03:36:28 crc kubenswrapper[4624]: E0228 03:36:28.009727 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 03:36:28 crc kubenswrapper[4624]: I0228 03:36:28.020961 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:28 crc kubenswrapper[4624]: I0228 03:36:28.022695 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:28 crc kubenswrapper[4624]: I0228 03:36:28.022757 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:28 crc kubenswrapper[4624]: I0228 03:36:28.022770 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:28 crc kubenswrapper[4624]: I0228 03:36:28.022809 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:28 crc kubenswrapper[4624]: E0228 03:36:28.028288 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 03:36:28 crc kubenswrapper[4624]: I0228 03:36:28.994811 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:29 crc kubenswrapper[4624]: W0228 03:36:29.073180 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" 
cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 28 03:36:29 crc kubenswrapper[4624]: E0228 03:36:29.073259 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 03:36:29 crc kubenswrapper[4624]: I0228 03:36:29.994058 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:30 crc kubenswrapper[4624]: I0228 03:36:30.045755 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:30 crc kubenswrapper[4624]: I0228 03:36:30.046048 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:30 crc kubenswrapper[4624]: I0228 03:36:30.048020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:30 crc kubenswrapper[4624]: I0228 03:36:30.048097 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:30 crc kubenswrapper[4624]: I0228 03:36:30.048111 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:30 crc kubenswrapper[4624]: I0228 03:36:30.048926 4624 scope.go:117] "RemoveContainer" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.049221 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.605977 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc112bd6c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,LastTimestamp:2026-02-28 03:35:45.985465435 +0000 UTC m=+0.649504774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.613389 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.620026 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.627786 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.633809 4624 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc11c564063 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.146476131 +0000 UTC m=+0.810515450,LastTimestamp:2026-02-28 03:35:46.146476131 +0000 UTC m=+0.810515450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.641241 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.188205471 +0000 UTC m=+0.852244800,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.648747 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.188234302 +0000 UTC m=+0.852273621,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.655153 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf4833\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.188247992 +0000 UTC m=+0.852287311,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.669042 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.189453174 +0000 UTC m=+0.853492493,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.676865 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.189475555 +0000 UTC m=+0.853514874,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.685675 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf4833\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.189487055 +0000 UTC m=+0.853526374,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.693892 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.189941711 +0000 UTC m=+0.853981030,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.701615 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC 
m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.189960391 +0000 UTC m=+0.853999710,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.706611 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf4833\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.189970892 +0000 UTC m=+0.854010211,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.712051 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.190738659 +0000 UTC m=+0.854777978,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.718649 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.190755749 +0000 UTC m=+0.854795068,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.725644 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf4833\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.19076678 +0000 UTC m=+0.854806099,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.732818 4624 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.192153957 +0000 UTC m=+0.856193266,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.737068 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.192182278 +0000 UTC m=+0.856221587,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.740958 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf4833\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.192193579 +0000 UTC m=+0.856232888,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.744241 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.192577762 +0000 UTC m=+0.856617081,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.748213 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.192598632 +0000 UTC m=+0.856637961,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.751977 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf4833\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf4833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052696115 +0000 UTC m=+0.716735434,LastTimestamp:2026-02-28 03:35:46.192699456 +0000 UTC m=+0.856738775,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.759159 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bebb56\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bebb56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052660054 +0000 UTC m=+0.716699373,LastTimestamp:2026-02-28 03:35:46.193413791 +0000 UTC m=+0.857453110,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.766778 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984bc116bf1894\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984bc116bf1894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.052683924 +0000 UTC m=+0.716723243,LastTimestamp:2026-02-28 03:35:46.193436782 +0000 UTC m=+0.857476111,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.774879 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc13688b2eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.585989867 +0000 UTC m=+1.250029216,LastTimestamp:2026-02-28 03:35:46.585989867 +0000 UTC m=+1.250029216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.779606 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984bc1369591ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.586833326 +0000 UTC m=+1.250872645,LastTimestamp:2026-02-28 03:35:46.586833326 +0000 UTC m=+1.250872645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.786199 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc136fe1d07 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.593684743 +0000 UTC m=+1.257724072,LastTimestamp:2026-02-28 03:35:46.593684743 +0000 UTC m=+1.257724072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.791374 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1379f053a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.604229946 +0000 UTC m=+1.268269265,LastTimestamp:2026-02-28 03:35:46.604229946 +0000 UTC m=+1.268269265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.796197 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc138131707 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:46.611836679 +0000 UTC m=+1.275875998,LastTimestamp:2026-02-28 03:35:46.611836679 +0000 UTC m=+1.275875998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.800632 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc15f96ff5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.274792795 
+0000 UTC m=+1.938832114,LastTimestamp:2026-02-28 03:35:47.274792795 +0000 UTC m=+1.938832114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.805920 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc15f9bf6e2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.275118306 +0000 UTC m=+1.939157625,LastTimestamp:2026-02-28 03:35:47.275118306 +0000 UTC m=+1.939157625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.815021 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984bc15fad5035 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.276255285 +0000 UTC m=+1.940294594,LastTimestamp:2026-02-28 03:35:47.276255285 +0000 UTC m=+1.940294594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.822592 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc15fb0fea9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.276496553 +0000 UTC m=+1.940535862,LastTimestamp:2026-02-28 03:35:47.276496553 +0000 UTC m=+1.940535862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.830355 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc160035548 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.28189268 +0000 UTC m=+1.945931989,LastTimestamp:2026-02-28 03:35:47.28189268 +0000 UTC m=+1.945931989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.836704 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1608c9741 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.290888001 +0000 UTC m=+1.954927350,LastTimestamp:2026-02-28 03:35:47.290888001 +0000 UTC m=+1.954927350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.843305 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc16099295f openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.291711839 +0000 UTC m=+1.955751198,LastTimestamp:2026-02-28 03:35:47.291711839 +0000 UTC m=+1.955751198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.848901 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984bc1611a3d52 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.30017109 +0000 UTC m=+1.964210409,LastTimestamp:2026-02-28 03:35:47.30017109 +0000 UTC m=+1.964210409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.854556 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc16127f002 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.301068802 +0000 UTC m=+1.965108121,LastTimestamp:2026-02-28 03:35:47.301068802 +0000 UTC m=+1.965108121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.861734 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc1613efdda openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.302579674 +0000 UTC m=+1.966618993,LastTimestamp:2026-02-28 03:35:47.302579674 +0000 UTC m=+1.966618993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.868987 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc16153e2fb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.303949051 +0000 UTC m=+1.967988400,LastTimestamp:2026-02-28 03:35:47.303949051 +0000 UTC m=+1.967988400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.875758 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc1709675f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.559970291 +0000 UTC m=+2.224009640,LastTimestamp:2026-02-28 03:35:47.559970291 +0000 UTC m=+2.224009640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.883597 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc1717f8eff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.575246591 +0000 UTC m=+2.239285940,LastTimestamp:2026-02-28 03:35:47.575246591 +0000 UTC m=+2.239285940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.889978 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc1719a1b78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.576986488 +0000 UTC m=+2.241025837,LastTimestamp:2026-02-28 03:35:47.576986488 +0000 UTC m=+2.241025837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.898140 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc17ffa1020 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.818156064 +0000 UTC m=+2.482195423,LastTimestamp:2026-02-28 03:35:47.818156064 +0000 UTC m=+2.482195423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.905317 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc180c8921e openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.831689758 +0000 UTC m=+2.495729117,LastTimestamp:2026-02-28 03:35:47.831689758 +0000 UTC m=+2.495729117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.912134 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc180e43058 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.833499736 +0000 UTC m=+2.497539095,LastTimestamp:2026-02-28 03:35:47.833499736 +0000 UTC m=+2.497539095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 
03:36:30.919473 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc18ea20e4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.064046667 +0000 UTC m=+2.728085996,LastTimestamp:2026-02-28 03:35:48.064046667 +0000 UTC m=+2.728085996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.924260 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc18f625344 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.076647236 +0000 UTC m=+2.740686555,LastTimestamp:2026-02-28 
03:35:48.076647236 +0000 UTC m=+2.740686555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.929570 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc191914f53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.113280851 +0000 UTC m=+2.777320170,LastTimestamp:2026-02-28 03:35:48.113280851 +0000 UTC m=+2.777320170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.937601 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984bc191df2158 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container 
image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.118380888 +0000 UTC m=+2.782420247,LastTimestamp:2026-02-28 03:35:48.118380888 +0000 UTC m=+2.782420247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.944383 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc191f50e1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.119817757 +0000 UTC m=+2.783857106,LastTimestamp:2026-02-28 03:35:48.119817757 +0000 UTC m=+2.783857106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.952779 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1924f20a9 
openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.125720745 +0000 UTC m=+2.789760084,LastTimestamp:2026-02-28 03:35:48.125720745 +0000 UTC m=+2.789760084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.961232 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1a18cb2ad openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.381414061 +0000 UTC m=+3.045453370,LastTimestamp:2026-02-28 03:35:48.381414061 +0000 UTC m=+3.045453370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.969066 4624 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc1a19f8d19 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.382649625 +0000 UTC m=+3.046688934,LastTimestamp:2026-02-28 03:35:48.382649625 +0000 UTC m=+3.046688934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.975453 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1a1a034fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.382692606 +0000 UTC m=+3.046731915,LastTimestamp:2026-02-28 03:35:48.382692606 +0000 UTC m=+3.046731915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.980953 4624 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984bc1a1a0d329 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.382733097 +0000 UTC m=+3.046772406,LastTimestamp:2026-02-28 03:35:48.382733097 +0000 UTC m=+3.046772406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.987202 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1a23379ed openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.392344045 +0000 UTC m=+3.056383354,LastTimestamp:2026-02-28 03:35:48.392344045 +0000 UTC m=+3.056383354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc 
kubenswrapper[4624]: I0228 03:36:30.992550 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.993239 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1a24639f0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.393572848 +0000 UTC m=+3.057612157,LastTimestamp:2026-02-28 03:35:48.393572848 +0000 UTC m=+3.057612157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:30 crc kubenswrapper[4624]: E0228 03:36:30.994822 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984bc1a2d4d04a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.40291745 +0000 UTC m=+3.066956759,LastTimestamp:2026-02-28 03:35:48.40291745 +0000 UTC m=+3.066956759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.001195 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1a31a208e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.407459982 +0000 UTC m=+3.071499291,LastTimestamp:2026-02-28 03:35:48.407459982 +0000 UTC m=+3.071499291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.005966 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1a32b2fa0 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.408577952 +0000 UTC m=+3.072617261,LastTimestamp:2026-02-28 03:35:48.408577952 +0000 UTC m=+3.072617261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.013759 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc1a40457a1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.422809505 +0000 UTC m=+3.086848814,LastTimestamp:2026-02-28 03:35:48.422809505 +0000 UTC m=+3.086848814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.018011 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1ae9946fe openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.60034227 +0000 UTC m=+3.264381579,LastTimestamp:2026-02-28 03:35:48.60034227 +0000 UTC m=+3.264381579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.025227 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1af749854 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.614715476 +0000 UTC m=+3.278754785,LastTimestamp:2026-02-28 03:35:48.614715476 +0000 UTC m=+3.278754785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.029851 4624 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1afb25909 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.618762505 +0000 UTC m=+3.282801814,LastTimestamp:2026-02-28 03:35:48.618762505 +0000 UTC m=+3.282801814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.034538 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1afc1eb28 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.619782952 +0000 UTC m=+3.283822261,LastTimestamp:2026-02-28 03:35:48.619782952 +0000 
UTC m=+3.283822261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.039693 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1b024e3da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.626269146 +0000 UTC m=+3.290308455,LastTimestamp:2026-02-28 03:35:48.626269146 +0000 UTC m=+3.290308455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.045811 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1b04f0ead openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.629032621 +0000 UTC m=+3.293071920,LastTimestamp:2026-02-28 03:35:48.629032621 +0000 UTC m=+3.293071920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.053653 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1ba4f6ebd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.796829373 +0000 UTC m=+3.460868682,LastTimestamp:2026-02-28 03:35:48.796829373 +0000 UTC m=+3.460868682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.060518 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1ba65eacb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.798302923 +0000 UTC m=+3.462342232,LastTimestamp:2026-02-28 03:35:48.798302923 +0000 UTC m=+3.462342232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.066566 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984bc1bb6c05dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.815480285 +0000 UTC m=+3.479519594,LastTimestamp:2026-02-28 03:35:48.815480285 +0000 UTC m=+3.479519594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.071179 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1bb8a7b43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.817476419 +0000 UTC m=+3.481515738,LastTimestamp:2026-02-28 03:35:48.817476419 +0000 UTC m=+3.481515738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.076181 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1bbd30806 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.822231046 +0000 UTC m=+3.486270365,LastTimestamp:2026-02-28 03:35:48.822231046 +0000 UTC m=+3.486270365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.081256 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1c58b3acc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.985297612 +0000 UTC m=+3.649336921,LastTimestamp:2026-02-28 03:35:48.985297612 +0000 UTC m=+3.649336921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.087882 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1c63bbbd7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.996864983 +0000 UTC m=+3.660904292,LastTimestamp:2026-02-28 
03:35:48.996864983 +0000 UTC m=+3.660904292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.092555 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1c652c51c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.998374684 +0000 UTC m=+3.662413993,LastTimestamp:2026-02-28 03:35:48.998374684 +0000 UTC m=+3.662413993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.098664 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc1ceef41b9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.142847929 +0000 UTC m=+3.806887238,LastTimestamp:2026-02-28 03:35:49.142847929 +0000 UTC m=+3.806887238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.104681 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1d05cf4a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.166814373 +0000 UTC m=+3.830853692,LastTimestamp:2026-02-28 03:35:49.166814373 +0000 UTC m=+3.830853692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.112224 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1d132c604 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.18082714 +0000 UTC m=+3.844866449,LastTimestamp:2026-02-28 03:35:49.18082714 +0000 UTC m=+3.844866449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.119567 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc1d9a4fdfe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.322530302 +0000 UTC m=+3.986569611,LastTimestamp:2026-02-28 03:35:49.322530302 +0000 UTC m=+3.986569611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.126567 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc1da71206d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.335908461 +0000 UTC m=+3.999947770,LastTimestamp:2026-02-28 03:35:49.335908461 +0000 UTC m=+3.999947770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.132287 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc20be758e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.165739748 +0000 UTC m=+4.829779087,LastTimestamp:2026-02-28 03:35:50.165739748 +0000 UTC m=+4.829779087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.137118 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18984bc21ae8a82b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.417483819 +0000 UTC m=+5.081523138,LastTimestamp:2026-02-28 03:35:50.417483819 +0000 UTC m=+5.081523138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.144746 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc21bc49c98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.431898776 +0000 UTC m=+5.095938095,LastTimestamp:2026-02-28 03:35:50.431898776 +0000 UTC m=+5.095938095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.149145 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc21be378c4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.43392122 +0000 UTC m=+5.097960549,LastTimestamp:2026-02-28 03:35:50.43392122 +0000 UTC m=+5.097960549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.153947 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc22b1b78b1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.689249457 +0000 UTC m=+5.353288816,LastTimestamp:2026-02-28 03:35:50.689249457 +0000 UTC m=+5.353288816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.159695 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc22c28d329 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.706901801 +0000 UTC m=+5.370941140,LastTimestamp:2026-02-28 03:35:50.706901801 +0000 UTC m=+5.370941140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.166840 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc22c49bdfe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.70905907 +0000 UTC m=+5.373098409,LastTimestamp:2026-02-28 03:35:50.70905907 +0000 UTC m=+5.373098409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.173267 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18984bc23a513e5e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.94443171 +0000 UTC m=+5.608471019,LastTimestamp:2026-02-28 03:35:50.94443171 +0000 UTC m=+5.608471019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.178430 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc23b534076 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.961340534 +0000 UTC m=+5.625379843,LastTimestamp:2026-02-28 03:35:50.961340534 +0000 UTC m=+5.625379843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.185903 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc23b681039 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:50.962704441 +0000 UTC m=+5.626743750,LastTimestamp:2026-02-28 03:35:50.962704441 +0000 UTC m=+5.626743750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.192672 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc24951ca67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:51.196125799 +0000 UTC m=+5.860165118,LastTimestamp:2026-02-28 03:35:51.196125799 +0000 UTC m=+5.860165118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.197608 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18984bc24a0d089f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:51.208396959 +0000 UTC m=+5.872436268,LastTimestamp:2026-02-28 03:35:51.208396959 +0000 UTC m=+5.872436268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.202051 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc24a2043e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:51.209657312 +0000 UTC m=+5.873696621,LastTimestamp:2026-02-28 03:35:51.209657312 +0000 UTC m=+5.873696621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.207257 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc255b638de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:51.40403427 +0000 UTC m=+6.068073579,LastTimestamp:2026-02-28 03:35:51.40403427 +0000 UTC m=+6.068073579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.214294 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984bc2564890a6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:51.413624998 +0000 UTC m=+6.077664327,LastTimestamp:2026-02-28 03:35:51.413624998 +0000 UTC m=+6.077664327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.220034 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 03:36:31 crc 
kubenswrapper[4624]: &Event{ObjectMeta:{kube-controller-manager-crc.18984bc2b5470c7c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 28 03:36:31 crc kubenswrapper[4624]: body: Feb 28 03:36:31 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:53.007361148 +0000 UTC m=+7.671400497,LastTimestamp:2026-02-28 03:35:53.007361148 +0000 UTC m=+7.671400497,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:31 crc kubenswrapper[4624]: > Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.225668 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc2b548585e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:53.00744611 +0000 UTC 
m=+7.671485459,LastTimestamp:2026-02-28 03:35:53.00744611 +0000 UTC m=+7.671485459,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.242565 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 03:36:31 crc kubenswrapper[4624]: &Event{ObjectMeta:{kube-apiserver-crc.18984bc458d996b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 28 03:36:31 crc kubenswrapper[4624]: body: Feb 28 03:36:31 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:00.046618297 +0000 UTC m=+14.710657636,LastTimestamp:2026-02-28 03:36:00.046618297 +0000 UTC m=+14.710657636,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:31 crc kubenswrapper[4624]: > Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.250695 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc458daeca6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:00.04670583 +0000 UTC m=+14.710745179,LastTimestamp:2026-02-28 03:36:00.04670583 +0000 UTC m=+14.710745179,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.259222 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984bc1c652c51c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1c652c51c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:48.998374684 +0000 UTC m=+3.662413993,LastTimestamp:2026-02-28 03:36:00.212131138 +0000 UTC m=+14.876170447,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.265489 
4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984bc1d05cf4a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1d05cf4a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.166814373 +0000 UTC m=+3.830853692,LastTimestamp:2026-02-28 03:36:00.510467551 +0000 UTC m=+15.174506860,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.271346 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984bc1d132c604\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc1d132c604 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:49.18082714 +0000 UTC m=+3.844866449,LastTimestamp:2026-02-28 03:36:00.52527304 +0000 UTC 
m=+15.189312349,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.275888 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 03:36:31 crc kubenswrapper[4624]: &Event{ObjectMeta:{kube-apiserver-crc.18984bc47942dc57 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 28 03:36:31 crc kubenswrapper[4624]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 03:36:31 crc kubenswrapper[4624]: Feb 28 03:36:31 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:00.590388311 +0000 UTC m=+15.254427620,LastTimestamp:2026-02-28 03:36:00.590388311 +0000 UTC m=+15.254427620,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:31 crc kubenswrapper[4624]: > Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.279735 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984bc4794375b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:00.590427572 +0000 UTC m=+15.254466881,LastTimestamp:2026-02-28 03:36:00.590427572 +0000 UTC m=+15.254466881,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.285975 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 03:36:31 crc kubenswrapper[4624]: &Event{ObjectMeta:{kube-controller-manager-crc.18984bc509594b11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 03:36:31 crc kubenswrapper[4624]: body: Feb 28 03:36:31 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007777553 +0000 UTC m=+17.671816902,LastTimestamp:2026-02-28 03:36:03.007777553 +0000 UTC m=+17.671816902,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:31 crc kubenswrapper[4624]: > Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.293015 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc5095c4ba4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007974308 +0000 UTC m=+17.672013647,LastTimestamp:2026-02-28 03:36:03.007974308 +0000 UTC m=+17.672013647,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.303770 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc509594b11\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 03:36:31 crc kubenswrapper[4624]: &Event{ObjectMeta:{kube-controller-manager-crc.18984bc509594b11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 03:36:31 crc kubenswrapper[4624]: body: Feb 28 03:36:31 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007777553 +0000 UTC m=+17.671816902,LastTimestamp:2026-02-28 03:36:13.007385946 +0000 UTC m=+27.671425295,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:31 crc kubenswrapper[4624]: > Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.310428 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc5095c4ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc5095c4ba4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007974308 +0000 UTC m=+17.672013647,LastTimestamp:2026-02-28 03:36:13.007470258 
+0000 UTC m=+27.671509577,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.317338 4624 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc75d964957 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:13.010995543 +0000 UTC m=+27.675034912,LastTimestamp:2026-02-28 03:36:13.010995543 +0000 UTC m=+27.675034912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.322587 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc16153e2fb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc16153e2fb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.303949051 +0000 UTC m=+1.967988400,LastTimestamp:2026-02-28 03:36:13.127511406 +0000 UTC m=+27.791550725,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.328223 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc1709675f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc1709675f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.559970291 +0000 UTC m=+2.224009640,LastTimestamp:2026-02-28 03:36:13.297877868 +0000 UTC m=+27.961917167,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.334131 4624 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.18984bc1717f8eff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc1717f8eff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:35:47.575246591 +0000 UTC m=+2.239285940,LastTimestamp:2026-02-28 03:36:13.307114676 +0000 UTC m=+27.971153985,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.341188 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc509594b11\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 03:36:31 crc kubenswrapper[4624]: &Event{ObjectMeta:{kube-controller-manager-crc.18984bc509594b11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 
28 03:36:31 crc kubenswrapper[4624]: body: Feb 28 03:36:31 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007777553 +0000 UTC m=+17.671816902,LastTimestamp:2026-02-28 03:36:23.008422448 +0000 UTC m=+37.672461757,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:31 crc kubenswrapper[4624]: > Feb 28 03:36:31 crc kubenswrapper[4624]: E0228 03:36:31.351032 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc5095c4ba4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984bc5095c4ba4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007974308 +0000 UTC m=+17.672013647,LastTimestamp:2026-02-28 03:36:23.008567783 +0000 UTC m=+37.672607092,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:36:31 crc kubenswrapper[4624]: I0228 03:36:31.993406 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 28 03:36:32 crc kubenswrapper[4624]: I0228 03:36:32.993355 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:33 crc kubenswrapper[4624]: I0228 03:36:33.007978 4624 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:36:33 crc kubenswrapper[4624]: I0228 03:36:33.008112 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:36:33 crc kubenswrapper[4624]: E0228 03:36:33.013568 4624 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984bc509594b11\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 03:36:33 crc kubenswrapper[4624]: &Event{ObjectMeta:{kube-controller-manager-crc.18984bc509594b11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 03:36:33 crc kubenswrapper[4624]: body: Feb 28 03:36:33 crc kubenswrapper[4624]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:36:03.007777553 +0000 UTC m=+17.671816902,LastTimestamp:2026-02-28 03:36:33.008061203 +0000 UTC m=+47.672100512,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 03:36:33 crc kubenswrapper[4624]: > Feb 28 03:36:33 crc kubenswrapper[4624]: W0228 03:36:33.068773 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 28 03:36:33 crc kubenswrapper[4624]: E0228 03:36:33.068885 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 03:36:33 crc kubenswrapper[4624]: W0228 03:36:33.482439 4624 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 03:36:33 crc kubenswrapper[4624]: E0228 03:36:33.482525 4624 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 03:36:33 crc kubenswrapper[4624]: I0228 03:36:33.995374 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:34 crc kubenswrapper[4624]: I0228 03:36:34.997326 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:35 crc kubenswrapper[4624]: E0228 03:36:35.018247 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.028383 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.029801 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.029849 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.029868 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.029904 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:35 crc kubenswrapper[4624]: E0228 03:36:35.035521 4624 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.532242 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.532434 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.533615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.533672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.533703 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:35 crc kubenswrapper[4624]: I0228 03:36:35.996467 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:36 crc kubenswrapper[4624]: E0228 03:36:36.159595 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:36:36 crc kubenswrapper[4624]: I0228 03:36:36.993317 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:37 crc kubenswrapper[4624]: I0228 03:36:37.993892 4624 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:38 crc kubenswrapper[4624]: I0228 03:36:38.996193 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:39 crc kubenswrapper[4624]: I0228 03:36:39.996645 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:40 crc kubenswrapper[4624]: I0228 03:36:40.996824 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.057126 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.057394 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.059042 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.059112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.059121 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:41 
crc kubenswrapper[4624]: I0228 03:36:41.065133 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.086536 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.088015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.088060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.088076 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.088879 4624 scope.go:117] "RemoveContainer" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" Feb 28 03:36:41 crc kubenswrapper[4624]: E0228 03:36:41.089149 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.377543 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.378423 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.378468 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.378478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:41 crc kubenswrapper[4624]: I0228 03:36:41.995057 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:42 crc kubenswrapper[4624]: E0228 03:36:42.026178 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 03:36:42 crc kubenswrapper[4624]: I0228 03:36:42.036651 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:42 crc kubenswrapper[4624]: I0228 03:36:42.038054 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:42 crc kubenswrapper[4624]: I0228 03:36:42.038139 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:42 crc kubenswrapper[4624]: I0228 03:36:42.038159 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:42 crc kubenswrapper[4624]: I0228 03:36:42.038198 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:42 crc kubenswrapper[4624]: E0228 03:36:42.045712 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 03:36:42 crc kubenswrapper[4624]: I0228 03:36:42.995846 4624 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:43 crc kubenswrapper[4624]: I0228 03:36:43.992763 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:44 crc kubenswrapper[4624]: I0228 03:36:44.996300 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:45 crc kubenswrapper[4624]: I0228 03:36:45.994767 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:46 crc kubenswrapper[4624]: E0228 03:36:46.159705 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:36:46 crc kubenswrapper[4624]: I0228 03:36:46.992614 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:47 crc kubenswrapper[4624]: I0228 03:36:47.994122 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:48 crc kubenswrapper[4624]: I0228 
03:36:48.993039 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:49 crc kubenswrapper[4624]: E0228 03:36:49.031645 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 03:36:49 crc kubenswrapper[4624]: I0228 03:36:49.046672 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:49 crc kubenswrapper[4624]: I0228 03:36:49.048277 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:49 crc kubenswrapper[4624]: I0228 03:36:49.048333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:49 crc kubenswrapper[4624]: I0228 03:36:49.048346 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:49 crc kubenswrapper[4624]: I0228 03:36:49.048383 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:49 crc kubenswrapper[4624]: E0228 03:36:49.053724 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 03:36:49 crc kubenswrapper[4624]: I0228 03:36:49.992763 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:50 crc 
kubenswrapper[4624]: I0228 03:36:50.993412 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:51 crc kubenswrapper[4624]: I0228 03:36:51.994779 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:52 crc kubenswrapper[4624]: I0228 03:36:52.993472 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.086679 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.088024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.088120 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.088143 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.089234 4624 scope.go:117] "RemoveContainer" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.410283 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 
03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.412097 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a"} Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.412290 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.413191 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.413234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.413249 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:53 crc kubenswrapper[4624]: I0228 03:36:53.994669 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.417401 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.418280 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.420409 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" exitCode=255 Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.420464 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a"} Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.420559 4624 scope.go:117] "RemoveContainer" containerID="c50f5a6a3fcde6d74ae32e38e8e6576d0d133bb15c5e343527f0d6065d825fb7" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.420717 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.421524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.421639 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.421752 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.422777 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:36:54 crc kubenswrapper[4624]: E0228 03:36:54.423026 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:54 crc kubenswrapper[4624]: I0228 03:36:54.993588 4624 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.122469 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.424577 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.427692 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.429387 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.429476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.429505 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:55 crc kubenswrapper[4624]: I0228 03:36:55.430586 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:36:55 crc kubenswrapper[4624]: E0228 03:36:55.430922 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:36:55 crc 
kubenswrapper[4624]: I0228 03:36:55.995953 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:56 crc kubenswrapper[4624]: E0228 03:36:56.040354 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.054527 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.055928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.055962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.055972 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.056003 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:36:56 crc kubenswrapper[4624]: E0228 03:36:56.060621 4624 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 03:36:56 crc kubenswrapper[4624]: E0228 03:36:56.160136 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.845738 4624 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.868712 4624 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 03:36:56 crc kubenswrapper[4624]: I0228 03:36:56.993464 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:57 crc kubenswrapper[4624]: I0228 03:36:57.995575 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:58 crc kubenswrapper[4624]: I0228 03:36:58.995142 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:36:59 crc kubenswrapper[4624]: I0228 03:36:59.993180 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.045150 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.045367 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.046667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 
03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.046731 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.046746 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.047520 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:37:00 crc kubenswrapper[4624]: E0228 03:37:00.047718 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:37:00 crc kubenswrapper[4624]: I0228 03:37:00.992838 4624 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 03:37:01 crc kubenswrapper[4624]: I0228 03:37:01.058750 4624 csr.go:261] certificate signing request csr-6cqn5 is approved, waiting to be issued Feb 28 03:37:01 crc kubenswrapper[4624]: I0228 03:37:01.067766 4624 csr.go:257] certificate signing request csr-6cqn5 is issued Feb 28 03:37:01 crc kubenswrapper[4624]: I0228 03:37:01.077119 4624 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 28 03:37:01 crc kubenswrapper[4624]: I0228 03:37:01.822931 4624 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 28 03:37:02 crc kubenswrapper[4624]: I0228 03:37:02.069428 4624 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-20 07:47:48.572520894 +0000 UTC Feb 28 03:37:02 crc kubenswrapper[4624]: I0228 03:37:02.069491 4624 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6364h10m46.503035463s for next certificate rotation Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.061366 4624 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.063376 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.063434 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.063447 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.063608 4624 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.083793 4624 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.084338 4624 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.084378 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.090430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.090486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.090509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.090543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.090564 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:03Z","lastTransitionTime":"2026-02-28T03:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.112788 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.124320 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.124377 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.124403 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.124436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.124463 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:03Z","lastTransitionTime":"2026-02-28T03:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.140634 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.153430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.153487 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.153505 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.153535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.153558 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:03Z","lastTransitionTime":"2026-02-28T03:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.172506 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.186543 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.186624 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.186645 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.186676 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:03 crc kubenswrapper[4624]: I0228 03:37:03.186703 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:03Z","lastTransitionTime":"2026-02-28T03:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.207272 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.207499 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.207548 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.308230 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.408532 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.509370 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.609914 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.710869 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.811512 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:03 crc kubenswrapper[4624]: E0228 03:37:03.912206 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.013328 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.114324 4624 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.214870 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.315975 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.416738 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.517612 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.619272 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.719653 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.820683 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:04 crc kubenswrapper[4624]: E0228 03:37:04.921676 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.022231 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.122608 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.223167 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc 
kubenswrapper[4624]: E0228 03:37:05.323747 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.424883 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.526137 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.626954 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.727200 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.827374 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:05 crc kubenswrapper[4624]: E0228 03:37:05.928280 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.028672 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.129141 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.161560 4624 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.230126 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.331316 4624 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.431807 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.532832 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.633070 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.733694 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.834115 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:06 crc kubenswrapper[4624]: E0228 03:37:06.934577 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.035608 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.136794 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.237551 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.337862 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.438903 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.539478 4624 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.640629 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.740980 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.841336 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:07 crc kubenswrapper[4624]: E0228 03:37:07.941825 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.042237 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.142932 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.243166 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.343537 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.444664 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.545274 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: E0228 03:37:08.646386 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc 
kubenswrapper[4624]: E0228 03:37:08.746943 4624 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.808557 4624 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.850541 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.850603 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.850624 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.850653 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.850673 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:08Z","lastTransitionTime":"2026-02-28T03:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.953911 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.953974 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.953993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.954020 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:08 crc kubenswrapper[4624]: I0228 03:37:08.954040 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:08Z","lastTransitionTime":"2026-02-28T03:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.012438 4624 apiserver.go:52] "Watching apiserver" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.020397 4624 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.020969 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.021752 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.021758 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.022526 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.022590 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.022526 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.022671 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.022543 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.022772 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.022195 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.024638 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026158 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026352 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026359 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026496 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026597 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026690 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026744 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.026910 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.057296 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc 
kubenswrapper[4624]: I0228 03:37:09.057370 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.057394 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.057426 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.057448 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.076585 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.093713 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.101355 4624 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.112155 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.114542 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.132515 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133219 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133356 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133414 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133467 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133503 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133536 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133572 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 03:37:09 crc 
kubenswrapper[4624]: I0228 03:37:09.133607 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133640 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133674 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133685 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133711 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133785 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133822 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133814 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133860 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133898 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133934 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133973 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.133984 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134014 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134050 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134116 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134202 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134237 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134272 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134280 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134308 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134345 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134385 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134421 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134465 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134503 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134517 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134559 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134540 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134579 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134657 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134714 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134770 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134821 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134856 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134873 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134934 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134962 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.134991 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135043 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135163 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135225 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135280 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135295 4624 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135329 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135434 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135482 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135535 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135578 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135586 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135636 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135689 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135744 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135757 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135792 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135900 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135884 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135946 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.135984 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136027 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136064 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 03:37:09 crc 
kubenswrapper[4624]: I0228 03:37:09.136040 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136071 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136131 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136239 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136298 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 03:37:09 crc 
kubenswrapper[4624]: I0228 03:37:09.136353 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136408 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136457 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136505 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136553 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136610 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136664 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136723 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136774 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136829 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136879 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 03:37:09 crc 
kubenswrapper[4624]: I0228 03:37:09.136950 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137006 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137067 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137160 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137211 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137264 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137319 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137371 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137426 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137480 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137532 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137596 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137654 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137708 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137760 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137813 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137867 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137922 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137981 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138032 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138123 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138180 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138233 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138285 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138341 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138399 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138458 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138511 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138560 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138615 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138667 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138719 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138772 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: 
I0228 03:37:09.138827 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138888 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138943 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138997 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139050 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139328 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139733 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139805 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139885 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139942 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140002 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140061 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140153 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140208 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140259 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140314 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140366 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140424 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140477 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140534 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140586 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140638 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140691 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140741 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140793 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140846 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140899 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140954 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141009 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141069 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141223 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141279 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141340 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141394 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141445 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141498 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141556 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141609 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 03:37:09 
crc kubenswrapper[4624]: I0228 03:37:09.141665 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141719 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141779 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141841 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141894 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141948 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142002 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142058 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142163 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142284 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 
03:37:09.142339 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142494 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142566 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142624 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142678 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142736 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142790 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142845 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142897 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142954 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143022 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143125 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143184 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143237 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143294 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143353 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143414 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143468 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143527 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143580 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143634 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143691 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143752 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143806 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143890 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143949 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144005 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144060 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144168 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144230 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144289 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144356 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144571 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144653 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144724 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144784 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144840 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144904 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144959 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145013 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145174 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145262 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145328 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145393 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145464 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145532 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145616 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145693 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145763 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145831 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145890 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145951 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.146011 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 
03:37:09.147649 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.147790 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.147972 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.147990 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148006 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148024 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148040 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node 
\"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148054 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148067 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148118 4624 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148132 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148145 4624 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148160 4624 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148175 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 
03:37:09.148191 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148206 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148221 4624 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136300 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.156042 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136517 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.136756 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137208 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137392 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137587 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137600 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.137906 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138446 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138561 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138329 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138823 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.138974 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139170 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.139924 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140302 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140425 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140464 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140587 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140819 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.140475 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141164 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141263 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141476 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141465 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141600 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141714 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141705 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.141912 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142220 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142415 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142517 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.142912 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143152 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143398 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143526 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.143716 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144061 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144307 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144439 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144758 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.144962 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145075 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145155 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145480 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145966 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.145983 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.146676 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.146845 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.147417 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.147606 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148021 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.147993 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148208 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148242 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.148302 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148458 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148485 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148642 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148681 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.148983 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.149323 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.149404 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.149443 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.149555 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.150022 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.150243 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.151032 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.151363 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.151500 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.152616 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.152686 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.153127 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.153263 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.153282 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.153583 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.153585 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.153953 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.154702 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.155375 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.155429 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.155993 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.156396 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.156671 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.156910 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.157130 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.158012 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.158028 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.158518 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.158545 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.158647 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.159802 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.160718 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.161184 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.161310 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.161360 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.162365 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.162436 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.162869 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.163489 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164000 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164205 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164506 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164574 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164587 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164655 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.164929 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165061 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165064 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165173 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165366 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165510 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165634 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165566 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.165898 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.166245 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.166698 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.166811 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.167208 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.167726 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.167974 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.167992 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.168024 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:09.667948043 +0000 UTC m=+84.331987392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.168792 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.169186 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.171235 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.169199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.169264 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.169690 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.169827 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.170012 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.170030 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.170448 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.170471 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.170674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.170698 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.171936 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.172022 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.172352 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.172432 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:09.672407494 +0000 UTC m=+84.336446993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.173202 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.173512 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.174007 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.174387 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.174735 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.174797 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.175680 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.176420 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.177409 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.174711 4624 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.179677 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.180555 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.181110 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.188728 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.175784 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.190774 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.190852 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.190896 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.191044 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.191219 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.191269 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.189728 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.189815 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.190148 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.192352 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.192430 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.192565 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:09.692544351 +0000 UTC m=+84.356583670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.190628 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.190834 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.191060 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.191179 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.191573 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.192202 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.194689 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.195041 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.195352 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.195382 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.195430 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.195443 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.195567 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:09.695537262 +0000 UTC m=+84.359576791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.196015 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:09.695995704 +0000 UTC m=+84.360035023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.196173 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.196614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.197130 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.197360 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.197498 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.197802 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.197870 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.198631 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.204685 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.204932 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.204953 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.205155 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.205240 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.205506 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.206944 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.207239 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.209553 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.209682 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.210353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.210837 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.212125 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.214202 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.214295 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.218335 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.219775 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.226796 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249561 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249620 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249683 4624 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249747 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249763 4624 reconciler_common.go:293] "Volume 
detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249790 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249803 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249816 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249828 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249838 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249846 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249856 4624 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249897 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249906 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249915 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249925 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249935 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249946 4624 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249957 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249969 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249982 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.249992 4624 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250001 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250010 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250020 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250029 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 
crc kubenswrapper[4624]: I0228 03:37:09.250038 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250047 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250057 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250069 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250098 4624 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250108 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250118 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250128 4624 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250141 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250150 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250160 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250169 4624 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250179 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250188 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250197 4624 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250206 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250214 4624 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250224 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250232 4624 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250241 4624 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250250 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250259 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250267 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250276 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250284 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250296 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250304 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250313 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250322 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath 
\"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250330 4624 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250340 4624 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250349 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250358 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250368 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250379 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250388 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: 
I0228 03:37:09.250397 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250406 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250417 4624 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250426 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250436 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250446 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250454 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250464 4624 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250474 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250484 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250493 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250502 4624 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250511 4624 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250520 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250529 4624 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250538 4624 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250546 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250555 4624 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250565 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250573 4624 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250582 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250591 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250600 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250608 4624 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250617 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250627 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250636 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250645 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250655 4624 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node 
\"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250665 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250674 4624 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250683 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250693 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250701 4624 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250714 4624 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250723 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250733 4624 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250742 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250751 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250762 4624 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250771 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250780 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250789 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250798 4624 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250807 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250818 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250827 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250836 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250844 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250853 4624 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250862 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" 
Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250870 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250879 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250889 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250898 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250906 4624 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250915 4624 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250923 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 
03:37:09.250932 4624 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250941 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250949 4624 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250957 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250966 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250975 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250983 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.250991 4624 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251003 4624 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251012 4624 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251021 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251030 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251039 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251048 4624 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251059 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on 
node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251068 4624 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251099 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251113 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251125 4624 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251136 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251148 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251157 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc 
kubenswrapper[4624]: I0228 03:37:09.251166 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251175 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251184 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251193 4624 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251203 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251213 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251221 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251231 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251242 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251251 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251261 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251270 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251280 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251289 4624 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251299 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251308 4624 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251317 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251327 4624 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251337 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251347 4624 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251357 4624 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251367 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on 
node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251377 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251388 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251398 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251407 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251416 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251426 4624 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251435 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251445 4624 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251455 4624 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251464 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251473 4624 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251483 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251492 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251505 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251514 4624 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251523 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.251532 4624 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.294649 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.294707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.294720 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.294742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.294757 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.347135 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.362263 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 03:37:09 crc kubenswrapper[4624]: W0228 03:37:09.364752 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c5814cb0ed830c06bb7cfd59b8d2c0699162c671db3e6f1f960bbbd246941304 WatchSource:0}: Error finding container c5814cb0ed830c06bb7cfd59b8d2c0699162c671db3e6f1f960bbbd246941304: Status 404 returned error can't find the container with id c5814cb0ed830c06bb7cfd59b8d2c0699162c671db3e6f1f960bbbd246941304 Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.367774 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 03:37:09 crc kubenswrapper[4624]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 03:37:09 crc kubenswrapper[4624]: set -o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 03:37:09 crc kubenswrapper[4624]: source /etc/kubernetes/apiserver-url.env Feb 28 03:37:09 crc kubenswrapper[4624]: else Feb 28 03:37:09 crc kubenswrapper[4624]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 03:37:09 crc kubenswrapper[4624]: exit 1 Feb 28 03:37:09 crc kubenswrapper[4624]: fi Feb 28 03:37:09 crc kubenswrapper[4624]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 03:37:09 crc kubenswrapper[4624]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 03:37:09 crc kubenswrapper[4624]: > logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.368868 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.372439 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 03:37:09 crc kubenswrapper[4624]: W0228 03:37:09.373312 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5b9ef37aa8d9021bb55ed71f532c35977661719c6107bc5969a83b7527814a54 WatchSource:0}: Error finding container 5b9ef37aa8d9021bb55ed71f532c35977661719c6107bc5969a83b7527814a54: Status 404 returned error can't find the container with id 5b9ef37aa8d9021bb55ed71f532c35977661719c6107bc5969a83b7527814a54 Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.375297 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 03:37:09 crc kubenswrapper[4624]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 03:37:09 crc kubenswrapper[4624]: if [[ -f "/env/_master" ]]; then Feb 28 03:37:09 crc kubenswrapper[4624]: set -o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: source "/env/_master" Feb 28 03:37:09 crc kubenswrapper[4624]: set +o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: fi Feb 28 03:37:09 crc kubenswrapper[4624]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 28 03:37:09 crc kubenswrapper[4624]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 28 03:37:09 crc kubenswrapper[4624]: ho_enable="--enable-hybrid-overlay" Feb 28 03:37:09 crc kubenswrapper[4624]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 28 03:37:09 crc kubenswrapper[4624]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 28 03:37:09 crc kubenswrapper[4624]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 28 03:37:09 crc kubenswrapper[4624]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 03:37:09 crc kubenswrapper[4624]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --webhook-host=127.0.0.1 \ Feb 28 03:37:09 crc kubenswrapper[4624]: --webhook-port=9743 \ Feb 28 03:37:09 crc kubenswrapper[4624]: ${ho_enable} \ Feb 28 03:37:09 crc kubenswrapper[4624]: --enable-interconnect \ Feb 28 03:37:09 crc kubenswrapper[4624]: --disable-approver \ Feb 28 03:37:09 crc kubenswrapper[4624]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --wait-for-kubernetes-api=200s \ Feb 28 03:37:09 crc kubenswrapper[4624]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --loglevel="${LOGLEVEL}" Feb 28 03:37:09 crc kubenswrapper[4624]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 03:37:09 crc kubenswrapper[4624]: > logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.382740 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 03:37:09 crc kubenswrapper[4624]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 03:37:09 crc 
kubenswrapper[4624]: if [[ -f "/env/_master" ]]; then Feb 28 03:37:09 crc kubenswrapper[4624]: set -o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: source "/env/_master" Feb 28 03:37:09 crc kubenswrapper[4624]: set +o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: fi Feb 28 03:37:09 crc kubenswrapper[4624]: Feb 28 03:37:09 crc kubenswrapper[4624]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 28 03:37:09 crc kubenswrapper[4624]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 03:37:09 crc kubenswrapper[4624]: --disable-webhook \ Feb 28 03:37:09 crc kubenswrapper[4624]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --loglevel="${LOGLEVEL}" Feb 28 03:37:09 crc kubenswrapper[4624]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 03:37:09 crc kubenswrapper[4624]: > logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.383944 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 28 03:37:09 crc kubenswrapper[4624]: W0228 03:37:09.392003 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-cb9383a420798fad33d6079e00dcf5c88b9d05114b31f2e605ff5ef88a88ccdd WatchSource:0}: Error finding container cb9383a420798fad33d6079e00dcf5c88b9d05114b31f2e605ff5ef88a88ccdd: Status 404 returned error can't find the container with id cb9383a420798fad33d6079e00dcf5c88b9d05114b31f2e605ff5ef88a88ccdd Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.396465 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.396777 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.396811 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.396822 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.396844 4624 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.396859 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.398126 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.471503 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b9ef37aa8d9021bb55ed71f532c35977661719c6107bc5969a83b7527814a54"} Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.473014 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 03:37:09 crc kubenswrapper[4624]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 03:37:09 crc kubenswrapper[4624]: if [[ -f "/env/_master" ]]; then Feb 28 03:37:09 crc kubenswrapper[4624]: set -o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: source "/env/_master" Feb 28 03:37:09 crc kubenswrapper[4624]: set +o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: fi Feb 28 03:37:09 crc kubenswrapper[4624]: # OVN-K will try to 
remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 28 03:37:09 crc kubenswrapper[4624]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 28 03:37:09 crc kubenswrapper[4624]: ho_enable="--enable-hybrid-overlay" Feb 28 03:37:09 crc kubenswrapper[4624]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 28 03:37:09 crc kubenswrapper[4624]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 28 03:37:09 crc kubenswrapper[4624]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 28 03:37:09 crc kubenswrapper[4624]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 03:37:09 crc kubenswrapper[4624]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --webhook-host=127.0.0.1 \ Feb 28 03:37:09 crc kubenswrapper[4624]: --webhook-port=9743 \ Feb 28 03:37:09 crc kubenswrapper[4624]: ${ho_enable} \ Feb 28 03:37:09 crc kubenswrapper[4624]: --enable-interconnect \ Feb 28 03:37:09 crc kubenswrapper[4624]: --disable-approver \ Feb 28 03:37:09 crc kubenswrapper[4624]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --wait-for-kubernetes-api=200s \ Feb 28 03:37:09 crc kubenswrapper[4624]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --loglevel="${LOGLEVEL}" Feb 28 03:37:09 crc kubenswrapper[4624]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 03:37:09 crc 
kubenswrapper[4624]: > logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.473418 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c5814cb0ed830c06bb7cfd59b8d2c0699162c671db3e6f1f960bbbd246941304"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.474955 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cb9383a420798fad33d6079e00dcf5c88b9d05114b31f2e605ff5ef88a88ccdd"} Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.475046 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 03:37:09 crc kubenswrapper[4624]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 03:37:09 crc kubenswrapper[4624]: if [[ -f "/env/_master" ]]; then Feb 28 03:37:09 crc kubenswrapper[4624]: set -o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: source "/env/_master" Feb 28 03:37:09 crc kubenswrapper[4624]: set +o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: fi Feb 28 03:37:09 crc kubenswrapper[4624]: Feb 28 03:37:09 crc kubenswrapper[4624]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 28 03:37:09 crc kubenswrapper[4624]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 03:37:09 crc kubenswrapper[4624]: --disable-webhook \ Feb 28 03:37:09 crc kubenswrapper[4624]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 28 03:37:09 crc kubenswrapper[4624]: --loglevel="${LOGLEVEL}" Feb 28 03:37:09 crc kubenswrapper[4624]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 03:37:09 crc kubenswrapper[4624]: > logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.475856 4624 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 03:37:09 crc kubenswrapper[4624]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 03:37:09 crc kubenswrapper[4624]: set -o allexport Feb 28 03:37:09 crc kubenswrapper[4624]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 03:37:09 crc kubenswrapper[4624]: source /etc/kubernetes/apiserver-url.env Feb 28 03:37:09 crc kubenswrapper[4624]: else Feb 28 03:37:09 crc kubenswrapper[4624]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 03:37:09 crc kubenswrapper[4624]: exit 1 Feb 28 03:37:09 crc kubenswrapper[4624]: fi Feb 28 03:37:09 crc kubenswrapper[4624]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 03:37:09 crc kubenswrapper[4624]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c
64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 03:37:09 crc kubenswrapper[4624]: > logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.476132 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.476659 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.477751 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.477789 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have 
not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.492831 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.499993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.500156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.500181 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.500216 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.500239 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.505850 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.516613 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.527545 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.537951 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.553015 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.563916 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.574069 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.594709 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.603817 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.603863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.603872 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.603891 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.603902 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.605365 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.618590 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.632233 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.644139 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.661000 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.706757 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.706785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.706793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.706809 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.706835 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.756138 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.756242 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.756279 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") 
pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756303 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:10.756279978 +0000 UTC m=+85.420319287 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.756331 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.756386 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756433 4624 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756452 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756466 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756506 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:10.756495315 +0000 UTC m=+85.420534624 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756526 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756564 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756577 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756614 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:10.756603428 +0000 UTC m=+85.420642737 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756682 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756736 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:10.756726641 +0000 UTC m=+85.420765950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756827 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: E0228 03:37:09.756896 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:10.756886075 +0000 UTC m=+85.420925384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.809451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.809487 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.809499 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.809518 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.809531 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.912010 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.912052 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.912061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.912073 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:09 crc kubenswrapper[4624]: I0228 03:37:09.912123 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:09Z","lastTransitionTime":"2026-02-28T03:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.014964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.015032 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.015051 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.015110 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.015131 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.097443 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.098666 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.101351 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.102227 4624 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.102843 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.105232 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.106337 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.107586 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: 
I0228 03:37:10.109549 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.110813 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.112750 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.113740 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.115839 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.117064 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.119202 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.119608 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.119690 4624 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.119711 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.119743 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.119765 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.121462 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.122755 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.124554 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.124986 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.125644 4624 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.126760 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.127649 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.128672 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.129165 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.130259 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.130702 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.131389 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.132577 4624 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.133133 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.134150 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.134669 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.135661 4624 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.135787 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.137493 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.138417 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 
03:37:10.138843 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.140369 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.141053 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.141962 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.142727 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.143981 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.144965 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.146011 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 
03:37:10.146726 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.147762 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.148321 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.149288 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.149839 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.150967 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.151524 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.152527 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 
03:37:10.153237 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.154360 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.155143 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.155795 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.223430 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.223735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.223835 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.223938 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.224014 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.326484 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.326538 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.326558 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.326581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.326598 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.429723 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.429782 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.429800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.429825 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.429843 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.533149 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.533226 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.533247 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.533276 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.533297 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.636018 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.636165 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.636177 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.636192 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.636201 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.739389 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.739468 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.739496 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.739528 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.739555 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.765639 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.765735 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.765787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.765870 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:12.765841196 +0000 UTC m=+87.429880535 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.765923 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.765946 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.765969 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.765987 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766029 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:10 crc 
kubenswrapper[4624]: E0228 03:37:10.766038 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:12.766024152 +0000 UTC m=+87.430063481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.765965 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766076 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:12.766063603 +0000 UTC m=+87.430102932 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766142 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766152 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766169 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766303 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766348 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:12.766301599 +0000 UTC m=+87.430341048 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:10 crc kubenswrapper[4624]: E0228 03:37:10.766392 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:12.766372111 +0000 UTC m=+87.430411640 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.842262 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.842308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.842321 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.842335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.842343 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.945339 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.945387 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.945421 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.945445 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:10 crc kubenswrapper[4624]: I0228 03:37:10.945459 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:10Z","lastTransitionTime":"2026-02-28T03:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.049244 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.049299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.049313 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.049337 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.049351 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.086323 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.086340 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.086368 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:11 crc kubenswrapper[4624]: E0228 03:37:11.086532 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:11 crc kubenswrapper[4624]: E0228 03:37:11.086617 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:11 crc kubenswrapper[4624]: E0228 03:37:11.086738 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.101972 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.102334 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:37:11 crc kubenswrapper[4624]: E0228 03:37:11.102889 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.152377 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.152984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.153058 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.153156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.153232 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.256568 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.256652 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.256671 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.256703 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.256727 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.360611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.360989 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.361130 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.361519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.361618 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.465596 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.465887 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.465970 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.466071 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.466228 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.480345 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:37:11 crc kubenswrapper[4624]: E0228 03:37:11.480497 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.569114 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.569178 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.569188 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.569208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.569220 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.672014 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.672108 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.672127 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.672146 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.672160 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.775653 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.775759 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.775799 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.775835 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.775858 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.878642 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.878907 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.878969 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.879038 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.879111 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.981581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.981670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.981696 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.981735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:11 crc kubenswrapper[4624]: I0228 03:37:11.981767 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:11Z","lastTransitionTime":"2026-02-28T03:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.084353 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.084423 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.084448 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.084479 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.084505 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.187023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.187062 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.187074 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.187109 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.187122 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.289685 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.289722 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.289734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.289755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.289767 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.308001 4624 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.391940 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.391989 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.392002 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.392022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.392032 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.495208 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.495279 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.495292 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.495316 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.495333 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.598223 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.598290 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.598304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.598327 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.598344 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.702566 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.702629 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.702649 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.702709 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.702735 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.784482 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.784630 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.784689 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.784752 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.784804 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.784934 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785022 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:16.784995024 +0000 UTC m=+91.449034373 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785559 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785635 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785666 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785749 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785780 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:16.785741004 +0000 UTC m=+91.449780363 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785783 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785867 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785917 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-28 03:37:16.785900378 +0000 UTC m=+91.449939717 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.785954 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:16.785938459 +0000 UTC m=+91.449977798 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.786037 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: E0228 03:37:12.786172 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:16.786147704 +0000 UTC m=+91.450187243 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.807318 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.807567 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.807748 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.807795 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.807961 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.911889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.911960 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.911977 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.912001 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:12 crc kubenswrapper[4624]: I0228 03:37:12.912019 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:12Z","lastTransitionTime":"2026-02-28T03:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.015704 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.015800 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.015881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.015903 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.015915 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.087074 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.087230 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.087280 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.087325 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.087332 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.087523 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.119670 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.119721 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.119734 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.119755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.119768 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.226212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.226249 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.226267 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.226291 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.226304 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.293788 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.293830 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.293842 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.293860 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.293871 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.309888 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.314677 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.314719 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.314733 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.314752 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.314764 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.325411 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.330444 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.330477 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.330486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.330503 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.330515 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.341201 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.347151 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.347223 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.347244 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.347273 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.347295 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.364285 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.370524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.370603 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.370622 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.370655 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.370675 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.388409 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:13 crc kubenswrapper[4624]: E0228 03:37:13.388614 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.392418 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.392460 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.392474 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.392495 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.392509 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.495369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.495417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.495433 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.495451 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.495465 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.598202 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.598370 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.598394 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.598425 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.598443 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.701866 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.701922 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.701932 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.701953 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.701966 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.805542 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.805596 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.805611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.805633 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.805646 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.909277 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.909361 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.909380 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.909410 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:13 crc kubenswrapper[4624]: I0228 03:37:13.909431 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:13Z","lastTransitionTime":"2026-02-28T03:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.012598 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.012641 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.012650 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.012668 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.012679 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.115141 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.115196 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.115212 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.115234 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.115247 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.218753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.218820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.218837 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.218865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.218883 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.321429 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.321487 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.321497 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.321515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.321525 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.424855 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.424920 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.424930 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.424947 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.424956 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.528443 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.528493 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.528507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.528527 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.528541 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.632786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.632864 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.632881 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.632907 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.632928 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.735992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.736042 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.736059 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.736112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.736131 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.840113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.840169 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.840182 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.840224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.840239 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.945938 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.945978 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.945993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.946018 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:14 crc kubenswrapper[4624]: I0228 03:37:14.946032 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:14Z","lastTransitionTime":"2026-02-28T03:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.049169 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.049237 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.049256 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.049286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.049310 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.087306 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.087401 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:15 crc kubenswrapper[4624]: E0228 03:37:15.087572 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:15 crc kubenswrapper[4624]: E0228 03:37:15.087804 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.087965 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:15 crc kubenswrapper[4624]: E0228 03:37:15.088118 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.152466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.152536 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.152560 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.152593 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.152618 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.255036 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.255104 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.255238 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.255262 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.255275 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.358483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.358538 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.358563 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.358581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.358595 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.461061 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.461125 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.461137 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.461155 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.461166 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.564043 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.564075 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.564099 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.564114 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.564124 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.667865 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.667923 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.667942 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.667969 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.667987 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.771244 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.771301 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.771312 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.771334 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.771347 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.874562 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.874615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.874628 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.874647 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.874661 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.977614 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.977691 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.977706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.977727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:15 crc kubenswrapper[4624]: I0228 03:37:15.977741 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:15Z","lastTransitionTime":"2026-02-28T03:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.080536 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.080599 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.080612 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.080633 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.080665 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.112159 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.142795 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.161673 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.181101 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.182997 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.183058 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.183075 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.183131 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.183152 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.198309 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.218463 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.231651 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.245578 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.286000 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.286068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.286121 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.286156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.286179 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.389719 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.389803 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.389820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.389841 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.389855 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.495601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.495638 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.495647 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.495664 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.495674 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.598070 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.598391 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.598477 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.598547 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.598611 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.701578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.701856 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.701928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.702014 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.702077 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.805383 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.805688 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.805755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.805837 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.805905 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.820970 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821189 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:24.821161209 +0000 UTC m=+99.485200689 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.821182 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.821294 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.821339 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.821371 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821387 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821503 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:24.821465018 +0000 UTC m=+99.485504477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821635 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821659 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821715 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821727 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821747 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821681 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821937 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.821798 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:24.821788547 +0000 UTC m=+99.485827846 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.822045 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:24.821991672 +0000 UTC m=+99.486031191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:16 crc kubenswrapper[4624]: E0228 03:37:16.822126 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:24.822070985 +0000 UTC m=+99.486110584 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.908824 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.908884 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.908898 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.908919 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:16 crc kubenswrapper[4624]: I0228 03:37:16.908933 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:16Z","lastTransitionTime":"2026-02-28T03:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.012297 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.012375 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.012405 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.012440 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.012468 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.086665 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.086691 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.086822 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:17 crc kubenswrapper[4624]: E0228 03:37:17.087057 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:17 crc kubenswrapper[4624]: E0228 03:37:17.087215 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:17 crc kubenswrapper[4624]: E0228 03:37:17.087662 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.115347 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.115385 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.115395 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.115412 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.115424 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.218250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.218287 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.218304 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.218320 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.218332 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.321131 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.321169 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.321178 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.321193 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.321203 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.423586 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.423623 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.423636 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.423654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.423666 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.526122 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.526582 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.526730 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.526884 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.527032 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.630252 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.630310 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.630325 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.630348 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.630361 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.732841 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.733311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.733530 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.733728 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.733818 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.836693 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.837005 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.837120 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.837302 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.837434 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.940853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.940897 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.940908 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.940925 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:17 crc kubenswrapper[4624]: I0228 03:37:17.940937 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:17Z","lastTransitionTime":"2026-02-28T03:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.043201 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.043249 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.043263 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.043285 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.043303 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.100926 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.145635 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.145664 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.145675 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.145689 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.145698 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.250210 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.250261 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.250277 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.250298 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.250311 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.286939 4624 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.353259 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.353559 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.353625 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.353701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.353763 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.460995 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.461034 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.461043 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.461060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.461069 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.564145 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.564402 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.564465 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.564533 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.564598 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.667687 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.667744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.667761 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.667789 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.667807 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.770327 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.770389 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.770407 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.770432 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.770452 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.874312 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.874406 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.874432 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.874467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.874492 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.978389 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.978453 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.978467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.978488 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:18 crc kubenswrapper[4624]: I0228 03:37:18.978502 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:18Z","lastTransitionTime":"2026-02-28T03:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.081047 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.081105 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.081113 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.081129 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.081138 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.086454 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.086455 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:19 crc kubenswrapper[4624]: E0228 03:37:19.086558 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.086555 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:19 crc kubenswrapper[4624]: E0228 03:37:19.086643 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:19 crc kubenswrapper[4624]: E0228 03:37:19.086834 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.184224 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.184266 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.184277 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.184297 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.184310 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.213565 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mbfnv"] Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.213981 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.215947 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tb79m"] Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.216977 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.218476 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.218724 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.218885 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.219017 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.218671 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.220304 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.220479 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.224099 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.230573 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.246115 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ccd115-f935-454b-94cc-26327d5df491-mcd-auth-proxy-config\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.246163 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8ccd115-f935-454b-94cc-26327d5df491-rootfs\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 
03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.246189 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jlg\" (UniqueName: \"kubernetes.io/projected/b9b8e663-8f53-4499-af0a-2c31ce15bdbf-kube-api-access-b6jlg\") pod \"node-resolver-tb79m\" (UID: \"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\") " pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.246248 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8ccd115-f935-454b-94cc-26327d5df491-proxy-tls\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.246282 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vst7d\" (UniqueName: \"kubernetes.io/projected/a8ccd115-f935-454b-94cc-26327d5df491-kube-api-access-vst7d\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.246318 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b9b8e663-8f53-4499-af0a-2c31ce15bdbf-hosts-file\") pod \"node-resolver-tb79m\" (UID: \"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\") " pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.259335 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.272786 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.284414 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.287424 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.287454 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 
03:37:19.287463 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.287481 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.287491 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.294187 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.315110 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.326767 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.337632 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.346773 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vst7d\" (UniqueName: \"kubernetes.io/projected/a8ccd115-f935-454b-94cc-26327d5df491-kube-api-access-vst7d\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: 
I0228 03:37:19.346807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b9b8e663-8f53-4499-af0a-2c31ce15bdbf-hosts-file\") pod \"node-resolver-tb79m\" (UID: \"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\") " pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.346824 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ccd115-f935-454b-94cc-26327d5df491-mcd-auth-proxy-config\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.346841 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8ccd115-f935-454b-94cc-26327d5df491-rootfs\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.346858 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6jlg\" (UniqueName: \"kubernetes.io/projected/b9b8e663-8f53-4499-af0a-2c31ce15bdbf-kube-api-access-b6jlg\") pod \"node-resolver-tb79m\" (UID: \"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\") " pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.346893 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8ccd115-f935-454b-94cc-26327d5df491-proxy-tls\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.347040 
4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b9b8e663-8f53-4499-af0a-2c31ce15bdbf-hosts-file\") pod \"node-resolver-tb79m\" (UID: \"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\") " pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.347244 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8ccd115-f935-454b-94cc-26327d5df491-rootfs\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.347916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ccd115-f935-454b-94cc-26327d5df491-mcd-auth-proxy-config\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.356115 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8ccd115-f935-454b-94cc-26327d5df491-proxy-tls\") pod \"machine-config-daemon-mbfnv\" (UID: \"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.358865 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.368986 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.374821 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vst7d\" (UniqueName: \"kubernetes.io/projected/a8ccd115-f935-454b-94cc-26327d5df491-kube-api-access-vst7d\") pod \"machine-config-daemon-mbfnv\" (UID: 
\"a8ccd115-f935-454b-94cc-26327d5df491\") " pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.379262 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6jlg\" (UniqueName: \"kubernetes.io/projected/b9b8e663-8f53-4499-af0a-2c31ce15bdbf-kube-api-access-b6jlg\") pod \"node-resolver-tb79m\" (UID: \"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\") " pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.382709 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.390985 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.391286 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.391498 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.391690 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.391877 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.397915 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.412137 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.426163 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.438600 4624 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.453973 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.467803 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.479515 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.487996 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.495511 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.495581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.495601 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.495631 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.495653 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.498261 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.520283 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.538896 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.546649 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tb79m" Feb 28 03:37:19 crc kubenswrapper[4624]: W0228 03:37:19.564104 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b8e663_8f53_4499_af0a_2c31ce15bdbf.slice/crio-6f1094df59612c163377aa8779c1e62e599915f35110cbbbae0a175ad112b07f WatchSource:0}: Error finding container 6f1094df59612c163377aa8779c1e62e599915f35110cbbbae0a175ad112b07f: Status 404 returned error can't find the container with id 6f1094df59612c163377aa8779c1e62e599915f35110cbbbae0a175ad112b07f Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.599694 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.599740 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.599752 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc 
kubenswrapper[4624]: I0228 03:37:19.599777 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.599789 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.613968 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-p5wwn"] Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.614334 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.615877 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-82tzq"] Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.617437 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.617781 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.617939 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.618097 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.619656 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-hd6z8"] Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.619921 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.620759 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.621134 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.632873 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.633129 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.633585 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.633767 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.634109 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.634382 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.635016 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.637139 4624 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.637310 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.645800 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.649859 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-cni-binary-copy\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650001 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cnibin\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650146 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-tuning-conf-dir\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650291 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-kubelet\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650393 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-node-log\") pod 
\"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650496 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-socket-dir-parent\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650612 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-cni-bin\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650692 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-conf-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cni-binary-copy\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650840 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-systemd\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650918 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-slash\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.650989 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-netd\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651057 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-k8s-cni-cncf-io\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651148 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-hostroot\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651218 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-daemon-config\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651470 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-var-lib-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651550 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-script-lib\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651623 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhhz\" (UniqueName: \"kubernetes.io/projected/54aef42d-7730-464b-90c7-1d8bdf5e622c-kube-api-access-mhhhz\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651703 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-os-release\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651857 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-env-overrides\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.651936 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovn-node-metrics-cert\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.653256 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-ovn\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.653358 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-cni-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.653877 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-etc-kubernetes\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654197 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-systemd-units\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654313 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-netns\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654395 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-config\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654472 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nq2\" (UniqueName: \"kubernetes.io/projected/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-kube-api-access-v6nq2\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654552 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654634 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-netns\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.654981 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-kubelet\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655249 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-system-cni-dir\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655336 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-bin\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655417 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655495 
4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wsnv\" (UniqueName: \"kubernetes.io/projected/8e4d0d22-89aa-4582-9922-47fbe84c7a78-kube-api-access-7wsnv\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-log-socket\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655662 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-etc-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655816 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-system-cni-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc 
kubenswrapper[4624]: I0228 03:37:19.654867 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.655902 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-multus-certs\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.656418 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-os-release\") pod 
\"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.656471 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-cni-multus\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.656495 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.656532 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-cnibin\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.694105 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.710705 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.710773 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.710787 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.710831 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.710847 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.712142 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.741617 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-etc-kubernetes\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763648 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-systemd-units\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763667 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-netns\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763683 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-config\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 
28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763701 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nq2\" (UniqueName: \"kubernetes.io/projected/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-kube-api-access-v6nq2\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763722 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763739 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-netns\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763754 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-kubelet\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763774 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-system-cni-dir\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc 
kubenswrapper[4624]: I0228 03:37:19.763789 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-bin\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763780 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-etc-kubernetes\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763847 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763879 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 
03:37:19.763892 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wsnv\" (UniqueName: \"kubernetes.io/projected/8e4d0d22-89aa-4582-9922-47fbe84c7a78-kube-api-access-7wsnv\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763941 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-log-socket\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.763993 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-etc-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764018 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764047 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-system-cni-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764071 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-multus-certs\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764125 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-os-release\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-cni-multus\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764183 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764220 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-cnibin\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764246 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-netns\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764255 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-cni-binary-copy\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764276 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-kubelet\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764281 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cnibin\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764299 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-system-cni-dir\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764324 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-bin\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764365 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-kubelet\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764388 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-systemd-units\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-node-log\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764413 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-netns\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764437 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-socket-dir-parent\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-cni-bin\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764511 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-conf-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764626 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cnibin\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764683 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-multus-certs\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764768 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-os-release\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764802 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-cni-multus\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.765110 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-config\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.765269 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-log-socket\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.765294 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-etc-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.765315 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.765350 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-cnibin\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.764370 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-system-cni-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766035 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766221 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-socket-dir-parent\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766309 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-conf-dir\") pod \"multus-p5wwn\" (UID: 
\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766306 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-node-log\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766378 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cni-binary-copy\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766323 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-kubelet\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766410 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-systemd\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766469 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-systemd\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc 
kubenswrapper[4624]: I0228 03:37:19.766929 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.766471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-slash\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767292 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-netd\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767322 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-k8s-cni-cncf-io\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767322 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e4d0d22-89aa-4582-9922-47fbe84c7a78-cni-binary-copy\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767377 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-hostroot\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767389 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-netd\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-daemon-config\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767431 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-run-k8s-cni-cncf-io\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767466 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-hostroot\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767473 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-var-lib-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767498 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-script-lib\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767548 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhhhz\" (UniqueName: \"kubernetes.io/projected/54aef42d-7730-464b-90c7-1d8bdf5e622c-kube-api-access-mhhhz\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767577 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-os-release\") pod \"multus-p5wwn\" 
(UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767629 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-env-overrides\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767656 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovn-node-metrics-cert\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767710 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-ovn\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767734 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-cni-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767943 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-cni-dir\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 
03:37:19.767977 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-multus-daemon-config\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.767986 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-var-lib-openvswitch\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.768053 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-os-release\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.768672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-script-lib\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.768714 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-ovn\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.768754 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-host-var-lib-cni-bin\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.768812 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-env-overrides\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.768887 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-slash\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.769788 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e4d0d22-89aa-4582-9922-47fbe84c7a78-tuning-conf-dir\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.770030 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-cni-binary-copy\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.777927 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovn-node-metrics-cert\") pod \"ovnkube-node-hd6z8\" (UID: 
\"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.788981 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.792603 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wsnv\" (UniqueName: \"kubernetes.io/projected/8e4d0d22-89aa-4582-9922-47fbe84c7a78-kube-api-access-7wsnv\") pod \"multus-additional-cni-plugins-82tzq\" (UID: \"8e4d0d22-89aa-4582-9922-47fbe84c7a78\") " pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.797985 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6nq2\" (UniqueName: \"kubernetes.io/projected/e8725d1d-2c0b-4f59-8489-f5f38f8e4d77-kube-api-access-v6nq2\") pod \"multus-p5wwn\" (UID: \"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\") " pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.802494 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhhhz\" (UniqueName: \"kubernetes.io/projected/54aef42d-7730-464b-90c7-1d8bdf5e622c-kube-api-access-mhhhz\") pod \"ovnkube-node-hd6z8\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.812955 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.812992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.813003 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.813022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.813032 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.821181 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.842621 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.856289 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.868788 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.880696 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.892029 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.906542 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.916176 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.916840 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.916886 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.916898 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 
03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.916914 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.916925 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:19Z","lastTransitionTime":"2026-02-28T03:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.925492 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.935473 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.949282 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p5wwn" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.952176 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.961763 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: W0228 03:37:19.961959 4624 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8725d1d_2c0b_4f59_8489_f5f38f8e4d77.slice/crio-c33d1b6d1fbbb262ba508a98f690356c3019dd2a008a317741c5ade10021f752 WatchSource:0}: Error finding container c33d1b6d1fbbb262ba508a98f690356c3019dd2a008a317741c5ade10021f752: Status 404 returned error can't find the container with id c33d1b6d1fbbb262ba508a98f690356c3019dd2a008a317741c5ade10021f752 Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.968798 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-82tzq" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.979722 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:19 crc kubenswrapper[4624]: W0228 03:37:19.982456 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e4d0d22_89aa_4582_9922_47fbe84c7a78.slice/crio-328c35ecfca7ae51b1ef7e6c0b23339a16f73ef4279614ddc9b241d8d277e2dd WatchSource:0}: Error finding container 328c35ecfca7ae51b1ef7e6c0b23339a16f73ef4279614ddc9b241d8d277e2dd: Status 404 returned error can't find the container with id 328c35ecfca7ae51b1ef7e6c0b23339a16f73ef4279614ddc9b241d8d277e2dd Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.991290 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:19 crc kubenswrapper[4624]: I0228 03:37:19.992434 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.002735 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.019047 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.020522 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.020557 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.020565 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.020586 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.020597 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.032229 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.045973 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.053315 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.123467 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.123535 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.123548 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.123589 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 
03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.123603 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.226654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.226689 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.226698 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.226714 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.226724 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.330335 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.330395 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.330409 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.330428 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.330443 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.433406 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.433798 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.433915 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.434060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.434197 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.507575 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tb79m" event={"ID":"b9b8e663-8f53-4499-af0a-2c31ce15bdbf","Type":"ContainerStarted","Data":"6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.509206 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tb79m" event={"ID":"b9b8e663-8f53-4499-af0a-2c31ce15bdbf","Type":"ContainerStarted","Data":"6f1094df59612c163377aa8779c1e62e599915f35110cbbbae0a175ad112b07f"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.509267 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.510009 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" exitCode=0 Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.510064 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.510150 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"979ebb3d7c3678d4131030b32d99891f3b710cb32fadc7463cd1a555e7e7d56f"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.512297 4624 generic.go:334] "Generic (PLEG): container 
finished" podID="8e4d0d22-89aa-4582-9922-47fbe84c7a78" containerID="8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275" exitCode=0 Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.512365 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerDied","Data":"8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.512391 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerStarted","Data":"328c35ecfca7ae51b1ef7e6c0b23339a16f73ef4279614ddc9b241d8d277e2dd"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.515434 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p5wwn" event={"ID":"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77","Type":"ContainerStarted","Data":"cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.515473 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p5wwn" event={"ID":"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77","Type":"ContainerStarted","Data":"c33d1b6d1fbbb262ba508a98f690356c3019dd2a008a317741c5ade10021f752"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.517595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.517625 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.517639 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"d9a21d0e1cd7ec90b0ad715e433d42b3e8116d02d7d6dbde8b5d67983fabd122"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.530247 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.536917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.536964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.536976 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.536996 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.537050 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.560628 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.577735 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.592708 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.606699 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.630570 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.639742 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.639782 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.639795 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.639815 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.639829 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.649658 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.660387 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.672128 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.687674 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.697668 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.709558 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.719716 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.758851 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.761656 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.761714 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.761730 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 
03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.761750 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.761764 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.768582 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335
ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.781132 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941
579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.794863 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38
811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.807070 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.819880 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.836605 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.858473 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.866998 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.867064 4624 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.867098 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.867123 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.867136 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.871795 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.882237 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.891680 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c22
70817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.905475 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.923423 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.934641 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.952925 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.970529 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.970609 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.970623 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.970644 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:20 crc kubenswrapper[4624]: I0228 03:37:20.970659 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:20Z","lastTransitionTime":"2026-02-28T03:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.073206 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.073269 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.073283 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.073306 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.073319 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.086258 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.086325 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.086513 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:21 crc kubenswrapper[4624]: E0228 03:37:21.086664 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:21 crc kubenswrapper[4624]: E0228 03:37:21.086764 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:21 crc kubenswrapper[4624]: E0228 03:37:21.086869 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.176513 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.177108 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.177156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.177181 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.177211 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.279775 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.279841 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.279856 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.279880 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.279899 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.383706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.383762 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.383772 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.383790 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.383801 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.487242 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.487282 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.487294 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.487311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.487323 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.528025 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.530609 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.530729 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.535925 4624 generic.go:334] "Generic (PLEG): container finished" podID="8e4d0d22-89aa-4582-9922-47fbe84c7a78" containerID="5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44" exitCode=0 Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.536027 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerDied","Data":"5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.548715 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.572039 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.585286 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.596815 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.596863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.596872 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.596890 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.596903 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.602964 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.619977 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.640319 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.660771 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 
03:37:21.678978 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.694381 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.704066 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.704112 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.704120 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.704136 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.704146 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.709018 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.721539 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.736024 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.752622 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd
1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.
11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.771647 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.807279 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.807420 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.807504 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.807591 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.807661 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.910407 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.910457 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.910471 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.910509 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:21 crc kubenswrapper[4624]: I0228 03:37:21.910523 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:21Z","lastTransitionTime":"2026-02-28T03:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.013245 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.013305 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.013316 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.013336 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.013348 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.116179 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.116241 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.116255 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.116276 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.116287 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.219407 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.219480 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.219492 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.219514 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.219538 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.274643 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kj8sv"] Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.275308 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.278947 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.279355 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.279782 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.279995 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.296229 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.301027 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7k9\" (UniqueName: \"kubernetes.io/projected/ece366ab-a1b1-4116-a208-b1a1a3551c7c-kube-api-access-sf7k9\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.301149 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ece366ab-a1b1-4116-a208-b1a1a3551c7c-serviceca\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.301245 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ece366ab-a1b1-4116-a208-b1a1a3551c7c-host\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.322021 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.322115 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.322137 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.322165 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.322183 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.338341 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.358714 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.379899 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 
03:37:22.401743 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ece366ab-a1b1-4116-a208-b1a1a3551c7c-serviceca\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.401807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ece366ab-a1b1-4116-a208-b1a1a3551c7c-host\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.401835 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf7k9\" (UniqueName: \"kubernetes.io/projected/ece366ab-a1b1-4116-a208-b1a1a3551c7c-kube-api-access-sf7k9\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.402700 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ece366ab-a1b1-4116-a208-b1a1a3551c7c-host\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.403514 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ece366ab-a1b1-4116-a208-b1a1a3551c7c-serviceca\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.420895 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf7k9\" (UniqueName: 
\"kubernetes.io/projected/ece366ab-a1b1-4116-a208-b1a1a3551c7c-kube-api-access-sf7k9\") pod \"node-ca-kj8sv\" (UID: \"ece366ab-a1b1-4116-a208-b1a1a3551c7c\") " pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.424836 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.425023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.425099 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.425185 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.425251 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.430810 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.448599 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.458909 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c22
70817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.470142 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.481132 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.491576 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.502134 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.509113 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.518331 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.527248 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.527299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.527311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.527331 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.527347 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.528298 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:
37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.540481 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kj8sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ece366ab-a1b1-4116-a208-b1a1a3551c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf7k9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kj8sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.542462 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.542520 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" 
event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.542532 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.544856 4624 generic.go:334] "Generic (PLEG): container finished" podID="8e4d0d22-89aa-4582-9922-47fbe84c7a78" containerID="e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a" exitCode=0 Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.544894 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerDied","Data":"e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.557777 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.572878 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c22
70817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.585215 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.589188 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kj8sv" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.600351 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.610228 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 
03:37:22.625456 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.629415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.629505 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.629561 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.629624 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.629679 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.636067 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.649432 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.662965 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.674897 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.690019 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.701987 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.711113 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.721233 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.728070 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kj8sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ece366ab-a1b1-4116-a208-b1a1a3551c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf7k9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kj8sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.740231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.740264 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.740274 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.740294 4624 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.740308 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.844579 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.844628 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.844643 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.844662 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.844678 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.947571 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.947615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.947625 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.947650 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:22 crc kubenswrapper[4624]: I0228 03:37:22.947662 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:22Z","lastTransitionTime":"2026-02-28T03:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.050454 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.050488 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.050497 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.050514 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.050524 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.086437 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.086596 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.086558 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.086705 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.086768 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.086877 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.152816 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.152858 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.152866 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.152880 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.152891 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.257802 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.258411 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.258436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.258466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.258489 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.362257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.362338 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.362363 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.362396 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.362416 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.464894 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.464930 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.464943 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.464962 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.464973 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.550658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kj8sv" event={"ID":"ece366ab-a1b1-4116-a208-b1a1a3551c7c","Type":"ContainerStarted","Data":"86c3b7b8084674eee60c57f165b4f01016435569385c3bebc1d9f69dcc5c693d"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.550759 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kj8sv" event={"ID":"ece366ab-a1b1-4116-a208-b1a1a3551c7c","Type":"ContainerStarted","Data":"ea5f241fb98f722dfe98bb769e6ec265f072ca03307b487f398797a439b74b61"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.556292 4624 generic.go:334] "Generic (PLEG): container finished" podID="8e4d0d22-89aa-4582-9922-47fbe84c7a78" containerID="677a1582700632050a196cbc9cb9286dd794f329461bebc18fbb2985fb2aa759" exitCode=0 Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.556349 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerDied","Data":"677a1582700632050a196cbc9cb9286dd794f329461bebc18fbb2985fb2aa759"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.568315 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c22
70817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.568425 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.568478 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.568496 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.568526 4624 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.568549 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.592073 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.614031 4624 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.628726 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 
03:37:23.649501 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.665877 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.671263 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.671299 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.671308 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.671324 4624 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.671335 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.681151 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.695672 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.711853 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.727035 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.743102 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.743147 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.743156 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.743171 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.743180 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.752665 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.757423 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.765008 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.772678 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.772712 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.772722 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.772740 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.772750 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.781288 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:
37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.783693 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",
\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.788990 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.789019 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.789029 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.789048 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.789060 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.791658 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kj8sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ece366ab-a1b1-4116-a208-b1a1a3551c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86c3b7b8084674eee60c57f165b4f01016435569385c3bebc1d9f69dcc5c693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf7k9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kj8sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.801891 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.801933 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.807677 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.807728 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.807739 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.807805 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.807818 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.812480 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tb79m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b8e663-8f53-4499-af0a-2c31ce15bdbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b3c2bccc132622a502dbfc9f3ca646483b1ffd6f18608b45dffe4d84f777f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jlg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tb79m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.819404 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.824524 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.825424 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.825486 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.825498 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.825517 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.825529 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.837745 4624 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2093f8b2-b9e5-423f-9f72-24050fa8f25c\\\",\\\"systemUUID\\\":\\\"b8a128da-abb3-432c-b92e-2d237967b814\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: E0228 03:37:23.838031 4624 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.838017 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.850505 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.851359 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.851409 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.851422 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.851441 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.851455 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.863067 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.873357 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.885691 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p5wwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6nq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p5wwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.894299 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kj8sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ece366ab-a1b1-4116-a208-b1a1a3551c7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86c3b7b8084674eee60c57f165b4f01016435569385c3bebc1d9f69dcc5c693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sf7k9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kj8sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.909267 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.921660 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.934419 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c22
70817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.948982 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a1582700632050a196cbc9cb9286dd794f329461bebc18fbb2985fb2aa759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677a1582700632050a196cbc9cb9286dd794f329461bebc18fbb2985fb2aa759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.954332 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.954380 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.954396 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.954415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.954428 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:23Z","lastTransitionTime":"2026-02-28T03:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.964844 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 03:37:23.974484 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:23 crc kubenswrapper[4624]: I0228 
03:37:23.990895 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.057846 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.057889 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.057899 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.057920 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.057937 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.160709 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.160754 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.160793 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.160820 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.160832 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.264578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.264744 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.264862 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.264978 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.265225 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.367607 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.367640 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.367653 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.367667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.367677 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.469981 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.470015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.470023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.470039 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.470049 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.561646 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"21956692d08292df78b2c39bcdbef1a98cef185dbdd8fd9dc57251247cf7eaba"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.564238 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7d12c1f75cad1d3a54ac1a1bd3c4e13909a4c9686fc72e717578ea2f508022e6"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.564295 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"047ad02fcc0fb5f947f0b54066b767354ae0c401236c32c051e2d2243d660fb8"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.567727 4624 generic.go:334] "Generic (PLEG): container finished" podID="8e4d0d22-89aa-4582-9922-47fbe84c7a78" containerID="4d40ac1dbe20efb09643a3f27757082aa48244c511cdbffd21020a14eba3e5d4" exitCode=0 Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.567801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerDied","Data":"4d40ac1dbe20efb09643a3f27757082aa48244c511cdbffd21020a14eba3e5d4"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.574890 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.574958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc 
kubenswrapper[4624]: I0228 03:37:24.574972 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.574992 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.575006 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.576911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.581696 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.595568 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4d0d22-89aa-4582-9922-47fbe84c7a78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7335077932bb49877294edccbfac53e0318d4a9ccc729f0efebd69bf82b275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f99a81c7dd552467d0a2bf155249ffc074f8190c60dde84e38b9db429af1e44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e80920a3d41e4f2ec2a662baeda1ef8879abac66474c76a82aa5e4b619201e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a1582700632050a196cbc9cb9286dd794f329461bebc18fbb2985fb2aa759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677a1582700632050a196cbc9cb9286dd794f329461bebc18fbb2985fb2aa759\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wsnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-82tzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.616834 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54aef42d-7730-464b-90c7-1d8bdf5e622c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhhhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hd6z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.625341 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb03168-b6ad-4b2b-84fd-9079db51ae0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed7e78ce87f7bf5efaa1f49db32b12298523c521f0d1719990bca7ef95c3be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf2a8fdacaee1ca5cb539e795a4e6c4ed71a8b9d34ded4d70dff7b14d876db85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 
03:37:24.646798 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"becfbaef-8442-41cf-9913-922717964529\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:36:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560408e3c82c5d791f9a469ab506b2222c5bf26ce2336e73d9f8b48c077dfadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b84e73e698382adbf041a6c145d55b25b7436802d65f8a85fb201e143af91ccf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55ce593e58b882b237826887ea086cd7bcd9b08770daeba96031e312912f5376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24addd15bb2766ef7775e3f811be403e40856d9d7ab8724a0397acb0a59da134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b15fff3a8661605c5e0f26b0c8ef1f716c26bffb38e6d28659de55686b61434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://ff82f7359d29dcdcc65e0d22d528710f6e3c02997758da7ed6108b683d055c4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9009f1318f6b1634933eac5601fda316df27be9c338feda4e8e2abd4cae22a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc66092b2c89e9963ab693023695137092a6048bc337f160a7f3149749b8b62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.656144 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.665263 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ccd115-f935-454b-94cc-26327d5df491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67cb6e0f1bd7b3e9390233b0c9772730df36018aa7294606640e5468197f4a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c22
70817c2678cc7c745e541748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vst7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mbfnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.675882 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T03:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T03:36:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 03:36:53.844804 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 03:36:53.844937 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 03:36:53.845605 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2726048636/tls.crt::/tmp/serving-cert-2726048636/tls.key\\\\\\\"\\\\nI0228 03:36:54.198038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 03:36:54.200909 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 03:36:54.200933 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 03:36:54.200954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 03:36:54.200958 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 03:36:54.205885 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 03:36:54.205906 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 03:36:54.205913 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205919 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 03:36:54.205925 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
03:36:54.205929 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 03:36:54.205932 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 03:36:54.205936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 03:36:54.208844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T03:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:35:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:35:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T03:35:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.677436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.677494 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.677507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.677529 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.677545 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.686119 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c68b7a0716cd16a2d5eedcf834d040c41aa486d42c4d5864af1dc45c7993c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T03:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.694931 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:37:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.757641 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tb79m" podStartSLOduration=39.757605847 podStartE2EDuration="39.757605847s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:24.736834694 +0000 UTC m=+99.400874003" watchObservedRunningTime="2026-02-28 03:37:24.757605847 +0000 UTC m=+99.421645156" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.774122 4624 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/multus-p5wwn" podStartSLOduration=39.774063753 podStartE2EDuration="39.774063753s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:24.772357577 +0000 UTC m=+99.436396876" watchObservedRunningTime="2026-02-28 03:37:24.774063753 +0000 UTC m=+99.438103052" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.783900 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.783945 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.783956 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.783978 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.783990 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.810652 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kj8sv" podStartSLOduration=39.810625415 podStartE2EDuration="39.810625415s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:24.791118376 +0000 UTC m=+99.455157725" watchObservedRunningTime="2026-02-28 03:37:24.810625415 +0000 UTC m=+99.474664734" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.831437 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.831559 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.831601 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.831638 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.831663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.831760 4624 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.831831 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.831813579 +0000 UTC m=+115.495852898 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.831917 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:40.831905673 +0000 UTC m=+115.495945002 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832021 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832044 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832059 4624 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832114 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.832105398 +0000 UTC m=+115.496144717 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832179 4624 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832182 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832209 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.83219899 +0000 UTC m=+115.496238309 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832210 4624 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832236 4624 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:24 crc kubenswrapper[4624]: E0228 03:37:24.832270 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.832262562 +0000 UTC m=+115.496301881 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.861568 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podStartSLOduration=39.861542416 podStartE2EDuration="39.861542416s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:24.838996885 +0000 UTC m=+99.503036204" watchObservedRunningTime="2026-02-28 03:37:24.861542416 +0000 UTC m=+99.525581715" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.886987 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.887035 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.887044 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.887060 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.887071 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.926392 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.926367244 podStartE2EDuration="15.926367244s" podCreationTimestamp="2026-02-28 03:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:24.924917525 +0000 UTC m=+99.588956834" watchObservedRunningTime="2026-02-28 03:37:24.926367244 +0000 UTC m=+99.590406553" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.926685 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.926682312 podStartE2EDuration="6.926682312s" podCreationTimestamp="2026-02-28 03:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:24.898317853 +0000 UTC m=+99.562357162" watchObservedRunningTime="2026-02-28 03:37:24.926682312 +0000 UTC m=+99.590721621" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.990141 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.990192 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.990200 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.990213 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 28 03:37:24 crc kubenswrapper[4624]: I0228 03:37:24.990222 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:24Z","lastTransitionTime":"2026-02-28T03:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.027046 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz"] Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.027851 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.030611 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.032854 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.056014 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-85p9r"] Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.056839 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.056920 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-85p9r" podUID="6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.086165 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.086188 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.086328 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.086777 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.086867 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.086926 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.092461 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.092507 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.092516 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.092532 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.092542 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.134878 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bda21a1b-ad56-451e-8d97-b5f153e47177-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.134932 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bda21a1b-ad56-451e-8d97-b5f153e47177-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.134963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bda21a1b-ad56-451e-8d97-b5f153e47177-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.135196 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqfh\" (UniqueName: \"kubernetes.io/projected/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-kube-api-access-gzqfh\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.135405 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jlhtq\" (UniqueName: \"kubernetes.io/projected/bda21a1b-ad56-451e-8d97-b5f153e47177-kube-api-access-jlhtq\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.135514 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.195863 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.195894 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.195902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.195917 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.195926 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.236990 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bda21a1b-ad56-451e-8d97-b5f153e47177-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.237053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bda21a1b-ad56-451e-8d97-b5f153e47177-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.237110 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bda21a1b-ad56-451e-8d97-b5f153e47177-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.237141 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzqfh\" (UniqueName: \"kubernetes.io/projected/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-kube-api-access-gzqfh\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.237167 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlhtq\" (UniqueName: 
\"kubernetes.io/projected/bda21a1b-ad56-451e-8d97-b5f153e47177-kube-api-access-jlhtq\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.237187 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.237996 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bda21a1b-ad56-451e-8d97-b5f153e47177-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.238078 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.238153 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs podName:6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:25.738136359 +0000 UTC m=+100.402175668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs") pod "network-metrics-daemon-85p9r" (UID: "6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.238646 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bda21a1b-ad56-451e-8d97-b5f153e47177-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.248211 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bda21a1b-ad56-451e-8d97-b5f153e47177-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.261064 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlhtq\" (UniqueName: \"kubernetes.io/projected/bda21a1b-ad56-451e-8d97-b5f153e47177-kube-api-access-jlhtq\") pod \"ovnkube-control-plane-749d76644c-8x2zz\" (UID: \"bda21a1b-ad56-451e-8d97-b5f153e47177\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.268688 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzqfh\" (UniqueName: \"kubernetes.io/projected/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-kube-api-access-gzqfh\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 
03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.298755 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.298805 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.298818 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.298837 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.298849 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.339520 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" Feb 28 03:37:25 crc kubenswrapper[4624]: W0228 03:37:25.354865 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbda21a1b_ad56_451e_8d97_b5f153e47177.slice/crio-3502394f79e5f9b086ad49fa8ebffa395be7d386156521514b8fc240e02455e4 WatchSource:0}: Error finding container 3502394f79e5f9b086ad49fa8ebffa395be7d386156521514b8fc240e02455e4: Status 404 returned error can't find the container with id 3502394f79e5f9b086ad49fa8ebffa395be7d386156521514b8fc240e02455e4 Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.401590 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.401655 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.401677 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.401707 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.401726 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.504525 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.504578 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.504588 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.504606 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.504618 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.582553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" event={"ID":"bda21a1b-ad56-451e-8d97-b5f153e47177","Type":"ContainerStarted","Data":"3502394f79e5f9b086ad49fa8ebffa395be7d386156521514b8fc240e02455e4"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.586665 4624 generic.go:334] "Generic (PLEG): container finished" podID="8e4d0d22-89aa-4582-9922-47fbe84c7a78" containerID="b07ecd86b57f87c91e3ad419a307a86881b1b206745e5d3c5a5d043ff4dd54f0" exitCode=0 Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.586717 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerDied","Data":"b07ecd86b57f87c91e3ad419a307a86881b1b206745e5d3c5a5d043ff4dd54f0"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.609672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.609703 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.609712 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.609727 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.609737 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.714558 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.714629 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.714643 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.714667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.714680 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.744716 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.744974 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:25 crc kubenswrapper[4624]: E0228 03:37:25.745109 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs podName:6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:26.745062507 +0000 UTC m=+101.409102026 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs") pod "network-metrics-daemon-85p9r" (UID: "6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.817229 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.817282 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.817293 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.817311 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.817324 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.922325 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.922409 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.922437 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.922470 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:25 crc kubenswrapper[4624]: I0228 03:37:25.922501 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:25Z","lastTransitionTime":"2026-02-28T03:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.025713 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.026187 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.026203 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.026229 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.026244 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.089412 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:37:26 crc kubenswrapper[4624]: E0228 03:37:26.090284 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.130257 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.130322 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.130341 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.130368 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.130387 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.233615 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.233664 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.233682 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.233706 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.233722 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.337015 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.337239 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.337341 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.337465 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.337573 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.452436 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.452514 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.452531 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.452557 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.452574 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.554984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.555022 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.555031 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.555047 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.555059 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.600031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerStarted","Data":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.601075 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.601118 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.601166 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.608095 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-82tzq" event={"ID":"8e4d0d22-89aa-4582-9922-47fbe84c7a78","Type":"ContainerStarted","Data":"53edae488b6888d1c2dd4c8f4f8973ad5f14fae9837e981239b95b5c0c6f6423"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.609601 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" event={"ID":"bda21a1b-ad56-451e-8d97-b5f153e47177","Type":"ContainerStarted","Data":"45f66d091ed918c937088abe572a8b4c7201bd9cb3810d758d954279d51867a5"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.609628 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" event={"ID":"bda21a1b-ad56-451e-8d97-b5f153e47177","Type":"ContainerStarted","Data":"3bd98d720ff43d078aafb67cb6168eb44f9757408fdfdb0fcc4995bc2508eaa1"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.630651 4624 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.631661 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.657265 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.657322 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.657333 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.657352 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.657364 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.666514 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podStartSLOduration=41.666487764 podStartE2EDuration="41.666487764s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:26.63423664 +0000 UTC m=+101.298275969" watchObservedRunningTime="2026-02-28 03:37:26.666487764 +0000 UTC m=+101.330527073" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.708384 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-82tzq" podStartSLOduration=41.70834842 podStartE2EDuration="41.70834842s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:26.692110979 +0000 UTC m=+101.356150288" watchObservedRunningTime="2026-02-28 03:37:26.70834842 +0000 UTC m=+101.372387729" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.709533 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8x2zz" podStartSLOduration=40.709527302 podStartE2EDuration="40.709527302s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:26.708130104 +0000 UTC m=+101.372169413" watchObservedRunningTime="2026-02-28 03:37:26.709527302 +0000 UTC m=+101.373566601" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.756158 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:26 crc kubenswrapper[4624]: E0228 03:37:26.756543 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:26 crc kubenswrapper[4624]: E0228 03:37:26.756769 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs podName:6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:28.756715691 +0000 UTC m=+103.420755190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs") pod "network-metrics-daemon-85p9r" (UID: "6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.760531 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.760576 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.760611 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.760636 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.760654 4624 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.862926 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.862961 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.862970 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.862984 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.862996 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.969178 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.969222 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.969231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.969250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:26 crc kubenswrapper[4624]: I0228 03:37:26.969259 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:26Z","lastTransitionTime":"2026-02-28T03:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.071921 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.071964 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.071973 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.071988 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.071997 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.086456 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:27 crc kubenswrapper[4624]: E0228 03:37:27.086554 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.086868 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:27 crc kubenswrapper[4624]: E0228 03:37:27.086974 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-85p9r" podUID="6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.087030 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:27 crc kubenswrapper[4624]: E0228 03:37:27.087118 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.087175 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:27 crc kubenswrapper[4624]: E0228 03:37:27.087238 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.174414 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.174457 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.174466 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.174483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.174493 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.277217 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.277287 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.277307 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.277338 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.277359 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.379845 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.380339 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.380371 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.380402 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.380426 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.483942 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.484014 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.484038 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.484068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.484125 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.587531 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.587622 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.587640 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.587665 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.587683 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.691250 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.691323 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.691337 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.691362 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.691378 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.794272 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.794343 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.794359 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.794386 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.794404 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.897427 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.897476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.897493 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.897519 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:27 crc kubenswrapper[4624]: I0228 03:37:27.897537 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:27Z","lastTransitionTime":"2026-02-28T03:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.000703 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.000773 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.000791 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.000819 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.000838 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.109135 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.109209 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.109231 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.109262 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.109282 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.211804 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.211866 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.211883 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.211902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.211916 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.314384 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.314425 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.314438 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.314454 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.314466 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.416961 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.417025 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.417041 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.417067 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.417100 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.520296 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.520369 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.520380 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.520402 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.520434 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.622994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.623128 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.623158 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.623195 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.623216 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.724390 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-85p9r"] Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.724552 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:28 crc kubenswrapper[4624]: E0228 03:37:28.724686 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-85p9r" podUID="6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.727342 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.727395 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.727413 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.727438 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.727459 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.778953 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:28 crc kubenswrapper[4624]: E0228 03:37:28.779220 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:28 crc kubenswrapper[4624]: E0228 03:37:28.779349 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs podName:6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:32.779322872 +0000 UTC m=+107.443362211 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs") pod "network-metrics-daemon-85p9r" (UID: "6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.830346 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.830401 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.830417 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.830461 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.830473 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.933387 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.933442 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.933462 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.933485 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:28 crc kubenswrapper[4624]: I0228 03:37:28.933503 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:28Z","lastTransitionTime":"2026-02-28T03:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.035667 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.035735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.035757 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.035785 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.035804 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.087068 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.087132 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.087190 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:29 crc kubenswrapper[4624]: E0228 03:37:29.087281 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:29 crc kubenswrapper[4624]: E0228 03:37:29.087425 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:29 crc kubenswrapper[4624]: E0228 03:37:29.087534 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.139513 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.139564 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.139605 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.139633 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.139651 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.242654 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.242764 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.242778 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.242801 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.242816 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.347279 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.347328 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.347339 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.347363 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.347376 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.451029 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.451068 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.451079 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.451115 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.451130 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.553483 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.553515 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.553525 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.553541 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.553550 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.656597 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.656641 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.656679 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.656699 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.656712 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.760625 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.760697 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.760716 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.760747 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.760766 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.864415 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.864464 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.864476 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.864497 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.864512 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.967781 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.967823 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.967832 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.967851 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:29 crc kubenswrapper[4624]: I0228 03:37:29.967862 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:29Z","lastTransitionTime":"2026-02-28T03:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.070581 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.070617 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.070628 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.070646 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.070658 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.172892 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.172958 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.172980 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.173024 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.173043 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.277928 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.277981 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.277998 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.278023 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.278041 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.381786 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.381860 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.381873 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.381891 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.381901 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.485701 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.485753 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.485770 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.485792 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.485803 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.589592 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.589643 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.589663 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.589691 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.589710 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.693172 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.693266 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.693279 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.693296 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.693308 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.797427 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.797511 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.797534 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.797576 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.797601 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.901365 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.901431 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.901448 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.901474 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:30 crc kubenswrapper[4624]: I0228 03:37:30.901494 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:30Z","lastTransitionTime":"2026-02-28T03:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.004583 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.004902 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.004989 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.005102 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.005207 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:31Z","lastTransitionTime":"2026-02-28T03:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.086332 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:31 crc kubenswrapper[4624]: E0228 03:37:31.086563 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.087008 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:31 crc kubenswrapper[4624]: E0228 03:37:31.087149 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-85p9r" podUID="6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.087178 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:31 crc kubenswrapper[4624]: E0228 03:37:31.087326 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.087206 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:31 crc kubenswrapper[4624]: E0228 03:37:31.087497 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.115524 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.115575 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.115589 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.115610 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.115665 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:31Z","lastTransitionTime":"2026-02-28T03:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.218936 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.218994 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.219008 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.219028 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.219040 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:31Z","lastTransitionTime":"2026-02-28T03:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.322672 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.322720 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.322735 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.322758 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.322772 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:31Z","lastTransitionTime":"2026-02-28T03:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.425851 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.425931 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.425954 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.425993 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.426019 4624 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T03:37:31Z","lastTransitionTime":"2026-02-28T03:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.529380 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.529716 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.529789 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.529853 4624 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.530019 4624 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.577032 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nhzzm"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.577798 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.578070 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.578359 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tztw9"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.578471 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.579402 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.593661 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.597562 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.597847 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.598832 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xk27r"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.599753 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.598070 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.597758 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.600279 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.600450 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.599530 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594382 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594450 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594487 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594513 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594537 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.600160 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594651 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594683 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594710 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.602719 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594737 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594765 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594796 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594828 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594930 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.597256 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.594334 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.599346 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.599398 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.599442 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.606904 4624 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.607445 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.610571 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.613293 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pcvf9"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.614017 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.616002 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.616708 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.617044 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.617258 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.617411 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.622831 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.623194 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.623389 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.628400 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.628698 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.628946 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: 
I0228 03:37:31.629155 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.629325 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.629487 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.629675 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.629982 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.630154 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.630270 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nhzzm"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.645628 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.652601 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.654623 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.660826 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.661754 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.683634 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.683888 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xk27r"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.683957 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pcvf9"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.684159 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.684478 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.684543 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.684993 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.685146 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.685299 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.685407 4624 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.688928 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ssl5n"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.689771 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-psbkg"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.690144 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4mctl"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.690527 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.690715 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.690933 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.691336 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.691644 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.691810 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.692100 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.692251 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.693318 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.693579 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.694237 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.694877 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.695527 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.695317 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.695381 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.695836 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.696260 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.696291 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.696305 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.696524 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.696887 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 28 03:37:31 crc kubenswrapper[4624]: 
I0228 03:37:31.696956 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.697071 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.697292 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.701349 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.701578 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.705277 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.706258 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.710607 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.711375 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.721723 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.721997 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.721892 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.722253 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.721919 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.721947 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.727818 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.728108 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.728342 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.728353 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.728533 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.728633 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tztw9"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.728667 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ssl5n"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729072 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729113 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-audit-dir\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729133 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0936e64b-6cac-4a66-a450-549b46c62631-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729150 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729168 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-encryption-config\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729183 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0936e64b-6cac-4a66-a450-549b46c62631-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729197 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-client-ca\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729213 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88fa632-5c8d-4728-b71c-024c96f40f58-serving-cert\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 
03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729229 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-service-ca-bundle\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729247 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-dir\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729283 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729300 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729316 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-audit-policies\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729334 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729361 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729378 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-client-ca\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729395 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: 
\"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729413 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729428 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe1b0a77-d59c-410a-bcbd-a17d327958ae-audit-dir\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729446 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qf8d\" (UniqueName: \"kubernetes.io/projected/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-kube-api-access-8qf8d\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729481 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729497 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-config\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729513 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0936e64b-6cac-4a66-a450-549b46c62631-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729538 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729566 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729585 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4q8v\" (UniqueName: \"kubernetes.io/projected/fe1b0a77-d59c-410a-bcbd-a17d327958ae-kube-api-access-z4q8v\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729604 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729621 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0936e64b-6cac-4a66-a450-549b46c62631-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729643 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729660 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-serving-cert\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729675 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0936e64b-6cac-4a66-a450-549b46c62631-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729691 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cnb4\" (UniqueName: \"kubernetes.io/projected/e8913a76-5e7d-4d49-a9a4-388c052cf594-kube-api-access-4cnb4\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-audit\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729735 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ec2f2d-58e7-41bf-969d-b91b920c9faa-config\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729751 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-config\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729767 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729784 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729811 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-serving-cert\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729828 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-86fmg\" (UniqueName: \"kubernetes.io/projected/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-kube-api-access-86fmg\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729846 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-etcd-client\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729864 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-images\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729880 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-config\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729898 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-serving-cert\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 
28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729918 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62ec2f2d-58e7-41bf-969d-b91b920c9faa-auth-proxy-config\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729937 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdgzh\" (UniqueName: \"kubernetes.io/projected/62ec2f2d-58e7-41bf-969d-b91b920c9faa-kube-api-access-mdgzh\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729956 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58vz8\" (UniqueName: \"kubernetes.io/projected/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-kube-api-access-58vz8\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729979 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-encryption-config\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.729996 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-config\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730016 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fplmb\" (UniqueName: \"kubernetes.io/projected/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-kube-api-access-fplmb\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730034 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-policies\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730051 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-etcd-client\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730068 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt78s\" (UniqueName: \"kubernetes.io/projected/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-kube-api-access-kt78s\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 
crc kubenswrapper[4624]: I0228 03:37:31.730100 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730120 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730139 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730158 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-config\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730176 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-etcd-serving-ca\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730195 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730318 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe1b0a77-d59c-410a-bcbd-a17d327958ae-node-pullsecrets\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730591 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-image-import-ca\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.730631 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb8tg\" (UniqueName: \"kubernetes.io/projected/b88fa632-5c8d-4728-b71c-024c96f40f58-kube-api-access-jb8tg\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 
crc kubenswrapper[4624]: I0228 03:37:31.730721 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62ec2f2d-58e7-41bf-969d-b91b920c9faa-machine-approver-tls\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.740219 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.741201 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.742980 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.743025 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.743830 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rpmzg"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.744216 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.744670 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.753206 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.753845 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.754262 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.762766 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.770343 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.775282 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.776342 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.783254 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.783735 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.784964 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.789182 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.789439 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.789562 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.790045 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.790226 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hztbp"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.790541 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9"] Feb 28 03:37:31 crc 
kubenswrapper[4624]: I0228 03:37:31.790889 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.790982 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.791186 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.791340 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.791516 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.792549 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.797570 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.804169 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.805857 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p56rc"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.806196 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pcq7q"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.806550 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.806876 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.807055 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.807229 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.807420 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.809810 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.810180 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.810376 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.810451 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fwdgt"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.810569 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.810772 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.810966 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.811314 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.812645 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.816875 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.817533 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.818733 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.819124 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.823251 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.823433 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.824117 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.824764 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.826372 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.832812 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.824795 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.835606 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58vz8\" (UniqueName: \"kubernetes.io/projected/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-kube-api-access-58vz8\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.835706 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-encryption-config\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.835784 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4sm2\" (UniqueName: \"kubernetes.io/projected/33362ea2-94ea-4770-863b-ff417db50389-kube-api-access-r4sm2\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.835859 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-oauth-config\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.835933 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-config\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.836008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fplmb\" (UniqueName: \"kubernetes.io/projected/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-kube-api-access-fplmb\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.838045 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-config\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840233 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33362ea2-94ea-4770-863b-ff417db50389-serving-cert\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840416 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-policies\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840510 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-etcd-client\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840593 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt78s\" (UniqueName: \"kubernetes.io/projected/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-kube-api-access-kt78s\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840670 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840743 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840825 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-srv-cert\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840915 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841003 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-config\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841136 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82rsd\" (UniqueName: \"kubernetes.io/projected/8ba76435-5533-4104-8fb6-b5be5f354eb6-kube-api-access-82rsd\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841229 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-trusted-ca-bundle\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49x9\" (UniqueName: 
\"kubernetes.io/projected/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-kube-api-access-h49x9\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841381 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841451 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe1b0a77-d59c-410a-bcbd-a17d327958ae-node-pullsecrets\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841528 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-etcd-serving-ca\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841602 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb8tg\" (UniqueName: \"kubernetes.io/projected/b88fa632-5c8d-4728-b71c-024c96f40f58-kube-api-access-jb8tg\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841672 
4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-image-import-ca\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841755 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62ec2f2d-58e7-41bf-969d-b91b920c9faa-machine-approver-tls\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841864 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbzm7\" (UniqueName: \"kubernetes.io/projected/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-kube-api-access-kbzm7\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.841934 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-apiservice-cert\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842008 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-oauth-serving-cert\") pod \"console-f9d7485db-ssl5n\" (UID: 
\"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842078 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c4102d73-26d6-461b-ac53-bfb4592a5e2b-signing-key\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842173 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842244 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-audit-dir\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842313 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0936e64b-6cac-4a66-a450-549b46c62631-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842394 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842466 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-encryption-config\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842536 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0936e64b-6cac-4a66-a450-549b46c62631-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/05428018-12ae-4524-b6f0-3abae46397dd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-spbcs\" (UID: \"05428018-12ae-4524-b6f0-3abae46397dd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842700 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-client-ca\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 
03:37:31.842785 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88fa632-5c8d-4728-b71c-024c96f40f58-serving-cert\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842854 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-service-ca-bundle\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.842925 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843018 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843118 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59a9283-102e-4e3b-addd-bda023aabec2-serving-cert\") pod 
\"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843209 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj95r\" (UniqueName: \"kubernetes.io/projected/299baa07-011e-4629-808b-f86667b5cd82-kube-api-access-sj95r\") pod \"downloads-7954f5f757-psbkg\" (UID: \"299baa07-011e-4629-808b-f86667b5cd82\") " pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843287 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvgr\" (UniqueName: \"kubernetes.io/projected/c4102d73-26d6-461b-ac53-bfb4592a5e2b-kube-api-access-8lvgr\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843364 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-serving-cert\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843431 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59a9283-102e-4e3b-addd-bda023aabec2-config\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843508 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-dir\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843583 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843658 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843733 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-audit-policies\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843809 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc 
kubenswrapper[4624]: I0228 03:37:31.843901 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.843978 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmtt\" (UniqueName: \"kubernetes.io/projected/05428018-12ae-4524-b6f0-3abae46397dd-kube-api-access-bhmtt\") pod \"cluster-samples-operator-665b6dd947-spbcs\" (UID: \"05428018-12ae-4524-b6f0-3abae46397dd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844070 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844211 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844267 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-audit-dir\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.840473 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp"] Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844295 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844455 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe1b0a77-d59c-410a-bcbd-a17d327958ae-audit-dir\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844529 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qf8d\" (UniqueName: \"kubernetes.io/projected/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-kube-api-access-8qf8d\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844606 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-client-ca\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: 
\"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844698 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9szkf\" (UniqueName: \"kubernetes.io/projected/63191dc2-3a46-435d-9e6d-158fe21737e1-kube-api-access-9szkf\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844839 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844915 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-config\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.844985 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0936e64b-6cac-4a66-a450-549b46c62631-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845057 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-ready\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845152 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba76435-5533-4104-8fb6-b5be5f354eb6-serving-cert\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845233 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845304 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csf9v\" (UniqueName: \"kubernetes.io/projected/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-kube-api-access-csf9v\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845411 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845494 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4q8v\" (UniqueName: \"kubernetes.io/projected/fe1b0a77-d59c-410a-bcbd-a17d327958ae-kube-api-access-z4q8v\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845564 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmjc\" (UniqueName: \"kubernetes.io/projected/a59a9283-102e-4e3b-addd-bda023aabec2-kube-api-access-bpmjc\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845639 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845712 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/0936e64b-6cac-4a66-a450-549b46c62631-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845786 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845857 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-serving-cert\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845922 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0936e64b-6cac-4a66-a450-549b46c62631-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.845995 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33362ea2-94ea-4770-863b-ff417db50389-config\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:31 crc 
kubenswrapper[4624]: I0228 03:37:31.846076 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cnb4\" (UniqueName: \"kubernetes.io/projected/e8913a76-5e7d-4d49-a9a4-388c052cf594-kube-api-access-4cnb4\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846170 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-audit\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846246 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ec2f2d-58e7-41bf-969d-b91b920c9faa-config\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846323 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-service-ca\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846405 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ba76435-5533-4104-8fb6-b5be5f354eb6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846497 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-config\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846575 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846643 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59a9283-102e-4e3b-addd-bda023aabec2-trusted-ca\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846791 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846862 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-serving-cert\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846934 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fmg\" (UniqueName: \"kubernetes.io/projected/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-kube-api-access-86fmg\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847011 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-tmpfs\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847077 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-webhook-cert\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:31 crc 
kubenswrapper[4624]: I0228 03:37:31.847188 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-images\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847269 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-etcd-client\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847339 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-config\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847406 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847482 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c4102d73-26d6-461b-ac53-bfb4592a5e2b-signing-cabundle\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847569 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-console-config\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847659 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-serving-cert\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847742 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62ec2f2d-58e7-41bf-969d-b91b920c9faa-auth-proxy-config\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847817 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdgzh\" (UniqueName: \"kubernetes.io/projected/62ec2f2d-58e7-41bf-969d-b91b920c9faa-kube-api-access-mdgzh\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847892 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgsl\" (UniqueName: \"kubernetes.io/projected/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-kube-api-access-phgsl\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.852916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0936e64b-6cac-4a66-a450-549b46c62631-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.853562 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.855470 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-policies\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.856599 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-dir\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.863592 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.863682 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0936e64b-6cac-4a66-a450-549b46c62631-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.864653 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.864748 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe1b0a77-d59c-410a-bcbd-a17d327958ae-audit-dir\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.865525 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-client-ca\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.867900 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n28nm"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.869364 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.869935 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.870375 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.870844 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n28nm"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.871004 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.874886 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.875513 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzbdn"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.914958 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-config\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.915243 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.915422 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-encryption-config\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.916928 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.918349 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.922482 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-audit-policies\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.923649 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.923737 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-client-ca\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.924749 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.926301 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.928163 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.928820 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.929145 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-etcd-client\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.929349 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-encryption-config\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.929667 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.929865 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.930173 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.931753 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.932068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88fa632-5c8d-4728-b71c-024c96f40f58-serving-cert\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.934822 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0936e64b-6cac-4a66-a450-549b46c62631-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.934939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.935226 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.941716 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.943288 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.846328 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.874522 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.934205 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.934460 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.953214 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-serving-cert\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.953741 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-audit\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.954256 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-config\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.954324 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b88fa632-5c8d-4728-b71c-024c96f40f58-service-ca-bundle\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.954886 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.955638 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62ec2f2d-58e7-41bf-969d-b91b920c9faa-config\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.955704 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-image-import-ca\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.956312 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0936e64b-6cac-4a66-a450-549b46c62631-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.957526 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe1b0a77-d59c-410a-bcbd-a17d327958ae-node-pullsecrets\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.959002 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-etcd-serving-ca\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.960653 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.961484 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-config\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.961559 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.961703 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.963390 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-images\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.968275 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zmfld"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.968488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.969034 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gmjwg"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.969149 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.969545 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.970435 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62ec2f2d-58e7-41bf-969d-b91b920c9faa-machine-approver-tls\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.847453 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1b0a77-d59c-410a-bcbd-a17d327958ae-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.970571 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gmjwg"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.970599 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-config\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.971224 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.971301 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.971516 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62ec2f2d-58e7-41bf-969d-b91b920c9faa-auth-proxy-config\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.971960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-serving-cert\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.974828 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.975025 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.976857 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-serving-cert\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.977376 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4mctl"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.977853 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.978530 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe1b0a77-d59c-410a-bcbd-a17d327958ae-etcd-client\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.987508 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.988944 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-psbkg"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.990049 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rpmzg"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.990330 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lvgr\" (UniqueName: \"kubernetes.io/projected/c4102d73-26d6-461b-ac53-bfb4592a5e2b-kube-api-access-8lvgr\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.990916 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-serving-cert\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.991433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59a9283-102e-4e3b-addd-bda023aabec2-config\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.991486 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5hj8\" (UniqueName: \"kubernetes.io/projected/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-kube-api-access-b5hj8\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.992066 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-ca\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.992171 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84768df9-4913-419b-b808-353e55de412b-proxy-tls\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.992286 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csf9v\" (UniqueName: \"kubernetes.io/projected/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-kube-api-access-csf9v\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.992328 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-serving-cert\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.993503 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59a9283-102e-4e3b-addd-bda023aabec2-config\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.994344 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14131bc3-3a7f-4152-84c2-9410e7fe638f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.994398 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-config\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.994447 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-client\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.994606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bd2f2c-ca0d-4cde-b5a6-657236634f37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.996566 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.996600 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fwdgt"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.996613 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dcc8081-77c7-47b8-a357-7bf604280bcf-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ck5wp\" (UID: \"4dcc8081-77c7-47b8-a357-7bf604280bcf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998328 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d82af0-7eea-4c15-af5d-e58d1a0b6721-secret-volume\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998399 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998499 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3ac81ca-3efe-4112-a8d0-9503bd1826b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m4qb9\" (UID: \"d3ac81ca-3efe-4112-a8d0-9503bd1826b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998683 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5t4\" (UniqueName: \"kubernetes.io/projected/11d82af0-7eea-4c15-af5d-e58d1a0b6721-kube-api-access-wj5t4\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998768 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-serving-cert\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998842 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6"]
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.998970 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgsl\" (UniqueName: \"kubernetes.io/projected/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-kube-api-access-phgsl\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999114 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8tl\" (UniqueName: \"kubernetes.io/projected/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-kube-api-access-zv8tl\") pod \"dns-operator-744455d44c-n28nm\" (UID: \"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999203 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-oauth-config\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999322 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14131bc3-3a7f-4152-84c2-9410e7fe638f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999725 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-images\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999773 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64fwf\" (UniqueName: \"kubernetes.io/projected/72bd2f2c-ca0d-4cde-b5a6-657236634f37-kube-api-access-64fwf\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999803 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d82af0-7eea-4c15-af5d-e58d1a0b6721-config-volume\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"
Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999885 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33362ea2-94ea-4770-863b-ff417db50389-serving-cert\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:31 crc kubenswrapper[4624]: I0228 03:37:31.999983 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82rsd\" (UniqueName: \"kubernetes.io/projected/8ba76435-5533-4104-8fb6-b5be5f354eb6-kube-api-access-82rsd\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.000956 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-trusted-ca-bundle\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.001019 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.001113 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32058b9-ce5b-414e-9533-69a136730886-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fwdgt\" (UID: 
\"f32058b9-ce5b-414e-9533-69a136730886\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.001505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.001727 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/72bd2f2c-ca0d-4cde-b5a6-657236634f37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.001775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-apiservice-cert\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.002550 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p56rc"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.002732 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c4102d73-26d6-461b-ac53-bfb4592a5e2b-signing-key\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.003840 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 
03:37:32.004046 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-trusted-ca-bundle\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.004142 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59a9283-102e-4e3b-addd-bda023aabec2-serving-cert\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.004171 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.004238 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5qjf\" (UniqueName: \"kubernetes.io/projected/84768df9-4913-419b-b808-353e55de412b-kube-api-access-d5qjf\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.004420 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.004789 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.005300 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmtt\" (UniqueName: \"kubernetes.io/projected/05428018-12ae-4524-b6f0-3abae46397dd-kube-api-access-bhmtt\") pod \"cluster-samples-operator-665b6dd947-spbcs\" (UID: \"05428018-12ae-4524-b6f0-3abae46397dd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.005367 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.005624 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72bd2f2c-ca0d-4cde-b5a6-657236634f37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.005741 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-oauth-config\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 
03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.005900 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9szkf\" (UniqueName: \"kubernetes.io/projected/63191dc2-3a46-435d-9e6d-158fe21737e1-kube-api-access-9szkf\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.007283 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.008126 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33362ea2-94ea-4770-863b-ff417db50389-serving-cert\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.009180 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7jk4g"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.007995 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5kn\" (UniqueName: \"kubernetes.io/projected/d3ac81ca-3efe-4112-a8d0-9503bd1826b7-kube-api-access-gq5kn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m4qb9\" (UID: \"d3ac81ca-3efe-4112-a8d0-9503bd1826b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.010604 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.011297 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h4pcb"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.011473 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba76435-5533-4104-8fb6-b5be5f354eb6-serving-cert\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.011672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.012669 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59a9283-102e-4e3b-addd-bda023aabec2-serving-cert\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.012846 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc 
kubenswrapper[4624]: I0228 03:37:32.012964 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmjc\" (UniqueName: \"kubernetes.io/projected/a59a9283-102e-4e3b-addd-bda023aabec2-kube-api-access-bpmjc\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013012 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33362ea2-94ea-4770-863b-ff417db50389-config\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013045 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhzf\" (UniqueName: \"kubernetes.io/projected/f32058b9-ce5b-414e-9533-69a136730886-kube-api-access-9vhzf\") pod \"multus-admission-controller-857f4d67dd-fwdgt\" (UID: \"f32058b9-ce5b-414e-9533-69a136730886\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013069 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013130 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-service-ca\") pod 
\"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013383 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-console-config\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013411 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c4102d73-26d6-461b-ac53-bfb4592a5e2b-signing-cabundle\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013448 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4sm2\" (UniqueName: \"kubernetes.io/projected/33362ea2-94ea-4770-863b-ff417db50389-kube-api-access-r4sm2\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013472 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013523 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-srv-cert\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013560 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013600 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhfm8\" (UniqueName: \"kubernetes.io/projected/4dcc8081-77c7-47b8-a357-7bf604280bcf-kube-api-access-rhfm8\") pod \"package-server-manager-789f6589d5-ck5wp\" (UID: \"4dcc8081-77c7-47b8-a357-7bf604280bcf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49x9\" (UniqueName: \"kubernetes.io/projected/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-kube-api-access-h49x9\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.013917 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrbn\" (UniqueName: \"kubernetes.io/projected/14131bc3-3a7f-4152-84c2-9410e7fe638f-kube-api-access-txrbn\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.014144 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33362ea2-94ea-4770-863b-ff417db50389-config\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.014339 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c4102d73-26d6-461b-ac53-bfb4592a5e2b-signing-cabundle\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015463 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbzm7\" (UniqueName: \"kubernetes.io/projected/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-kube-api-access-kbzm7\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015664 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015683 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-oauth-serving-cert\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015753 4624 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015759 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hcbj\" (UniqueName: \"kubernetes.io/projected/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-kube-api-access-9hcbj\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015782 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-console-config\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015817 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/05428018-12ae-4524-b6f0-3abae46397dd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-spbcs\" (UID: \"05428018-12ae-4524-b6f0-3abae46397dd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015841 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-metrics-tls\") pod \"dns-operator-744455d44c-n28nm\" (UID: \"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015861 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.015978 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.016018 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-config\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.016847 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-oauth-serving-cert\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.016951 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 
03:37:32.017141 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.019041 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n28nm"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.020295 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.021401 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.021589 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.021743 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/05428018-12ae-4524-b6f0-3abae46397dd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-spbcs\" (UID: \"05428018-12ae-4524-b6f0-3abae46397dd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.022865 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.024239 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.025297 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg"] 
Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.025528 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-apiservice-cert\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.026455 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h4pcb"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.027499 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.029141 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sms8n"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.029817 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c4102d73-26d6-461b-ac53-bfb4592a5e2b-signing-key\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.031596 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.031655 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzbdn"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.031679 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7jk4g"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.033264 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.033825 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.036470 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zmfld"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.037770 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.038918 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sms8n"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.040484 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6"] Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.041600 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.050156 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-srv-cert\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.050267 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.060526 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.082051 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.100616 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfbc\" (UniqueName: \"kubernetes.io/projected/940da15d-4365-40e8-9f00-33fecfb1e6c6-kube-api-access-6lfbc\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117694 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117731 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/f32058b9-ce5b-414e-9533-69a136730886-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fwdgt\" (UID: \"f32058b9-ce5b-414e-9533-69a136730886\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117777 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/72bd2f2c-ca0d-4cde-b5a6-657236634f37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117842 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19844673-a712-45d6-8b90-ddd98c2f1e97-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117886 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkcl4\" (UniqueName: \"kubernetes.io/projected/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-kube-api-access-vkcl4\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117921 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-plugins-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" 
Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.117962 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5qjf\" (UniqueName: \"kubernetes.io/projected/84768df9-4913-419b-b808-353e55de412b-kube-api-access-d5qjf\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.118005 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f55f77b6-8205-47a4-a420-f1cd7ffc7411-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.118076 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72bd2f2c-ca0d-4cde-b5a6-657236634f37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119517 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-csi-data-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-node-bootstrap-token\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119666 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5kn\" (UniqueName: \"kubernetes.io/projected/d3ac81ca-3efe-4112-a8d0-9503bd1826b7-kube-api-access-gq5kn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m4qb9\" (UID: \"d3ac81ca-3efe-4112-a8d0-9503bd1826b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119710 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-metrics-certs\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119781 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119838 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119867 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-profile-collector-cert\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119944 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-registration-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.119977 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhzf\" (UniqueName: \"kubernetes.io/projected/f32058b9-ce5b-414e-9533-69a136730886-kube-api-access-9vhzf\") pod \"multus-admission-controller-857f4d67dd-fwdgt\" (UID: \"f32058b9-ce5b-414e-9533-69a136730886\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120006 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120042 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120119 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-service-ca\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120154 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968cp\" (UniqueName: \"kubernetes.io/projected/29229b27-aee6-4450-ade6-ed702af8d343-kube-api-access-968cp\") pod \"migrator-59844c95c7-n22ft\" (UID: \"29229b27-aee6-4450-ade6-ed702af8d343\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120257 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8s7\" (UniqueName: \"kubernetes.io/projected/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-kube-api-access-8z8s7\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120292 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 
03:37:32.120319 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19844673-a712-45d6-8b90-ddd98c2f1e97-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120348 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56m9s\" (UniqueName: \"kubernetes.io/projected/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-kube-api-access-56m9s\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120375 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-proxy-tls\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120400 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5j8t\" (UniqueName: \"kubernetes.io/projected/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-kube-api-access-s5j8t\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120463 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhfm8\" (UniqueName: \"kubernetes.io/projected/4dcc8081-77c7-47b8-a357-7bf604280bcf-kube-api-access-rhfm8\") pod \"package-server-manager-789f6589d5-ck5wp\" (UID: \"4dcc8081-77c7-47b8-a357-7bf604280bcf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120550 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txrbn\" (UniqueName: \"kubernetes.io/projected/14131bc3-3a7f-4152-84c2-9410e7fe638f-kube-api-access-txrbn\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120607 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hcbj\" (UniqueName: \"kubernetes.io/projected/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-kube-api-access-9hcbj\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120673 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-mountpoint-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120702 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-metrics-tls\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120723 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmzx\" (UniqueName: \"kubernetes.io/projected/58b397be-ff15-406b-96fb-0bc29f605c61-kube-api-access-fwmzx\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120750 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hktj\" (UniqueName: \"kubernetes.io/projected/3293163b-b75d-40f1-b004-8d938c413a4b-kube-api-access-2hktj\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120800 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-metrics-tls\") pod \"dns-operator-744455d44c-n28nm\" (UID: \"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120844 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120865 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19844673-a712-45d6-8b90-ddd98c2f1e97-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120919 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-config\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120950 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj95r\" (UniqueName: \"kubernetes.io/projected/299baa07-011e-4629-808b-f86667b5cd82-kube-api-access-sj95r\") pod \"downloads-7954f5f757-psbkg\" (UID: \"299baa07-011e-4629-808b-f86667b5cd82\") " pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.120999 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-certs\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121030 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5hj8\" 
(UniqueName: \"kubernetes.io/projected/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-kube-api-access-b5hj8\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121050 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-stats-auth\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-cert\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121187 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121229 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-ca\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121258 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84768df9-4913-419b-b808-353e55de412b-proxy-tls\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-ready\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121342 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-serving-cert\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121381 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14131bc3-3a7f-4152-84c2-9410e7fe638f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121404 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-config\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-client\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121471 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pvn\" (UniqueName: \"kubernetes.io/projected/3912910a-bd9b-4b5d-a67a-c6929de727b9-kube-api-access-p4pvn\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121509 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-service-ca\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121530 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ba76435-5533-4104-8fb6-b5be5f354eb6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121553 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-srv-cert\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121581 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59a9283-102e-4e3b-addd-bda023aabec2-trusted-ca\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121606 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bd2f2c-ca0d-4cde-b5a6-657236634f37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121724 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121809 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-tmpfs\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121839 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-webhook-cert\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121841 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dcc8081-77c7-47b8-a357-7bf604280bcf-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ck5wp\" (UID: \"4dcc8081-77c7-47b8-a357-7bf604280bcf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121903 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d82af0-7eea-4c15-af5d-e58d1a0b6721-secret-volume\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121937 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/940da15d-4365-40e8-9f00-33fecfb1e6c6-service-ca-bundle\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.121974 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f55f77b6-8205-47a4-a420-f1cd7ffc7411-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122002 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122031 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3ac81ca-3efe-4112-a8d0-9503bd1826b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m4qb9\" (UID: \"d3ac81ca-3efe-4112-a8d0-9503bd1826b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122061 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5t4\" (UniqueName: \"kubernetes.io/projected/11d82af0-7eea-4c15-af5d-e58d1a0b6721-kube-api-access-wj5t4\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122120 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8tl\" (UniqueName: \"kubernetes.io/projected/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-kube-api-access-zv8tl\") pod \"dns-operator-744455d44c-n28nm\" (UID: 
\"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122323 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14131bc3-3a7f-4152-84c2-9410e7fe638f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122357 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-config-volume\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-images\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122451 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64fwf\" (UniqueName: \"kubernetes.io/projected/72bd2f2c-ca0d-4cde-b5a6-657236634f37-kube-api-access-64fwf\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122491 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/11d82af0-7eea-4c15-af5d-e58d1a0b6721-config-volume\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122527 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-default-certificate\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55f77b6-8205-47a4-a420-f1cd7ffc7411-config\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.122573 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-socket-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.124034 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-ready\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.125726 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-service-ca\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.126127 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ba76435-5533-4104-8fb6-b5be5f354eb6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.126552 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-tmpfs\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.127228 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.130454 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-ca\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: 
I0228 03:37:32.131167 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a59a9283-102e-4e3b-addd-bda023aabec2-trusted-ca\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.134485 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.136309 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.137689 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d82af0-7eea-4c15-af5d-e58d1a0b6721-secret-volume\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.138942 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-webhook-cert\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 
28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.143738 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.160876 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.165519 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba76435-5533-4104-8fb6-b5be5f354eb6-serving-cert\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.182857 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.201417 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.222255 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.223598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-mountpoint-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.223720 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-metrics-tls\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.223738 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-mountpoint-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.223826 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmzx\" (UniqueName: \"kubernetes.io/projected/58b397be-ff15-406b-96fb-0bc29f605c61-kube-api-access-fwmzx\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224025 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hktj\" (UniqueName: \"kubernetes.io/projected/3293163b-b75d-40f1-b004-8d938c413a4b-kube-api-access-2hktj\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224132 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19844673-a712-45d6-8b90-ddd98c2f1e97-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224234 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-certs\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224362 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-stats-auth\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224409 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-cert\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224449 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224636 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pvn\" (UniqueName: \"kubernetes.io/projected/3912910a-bd9b-4b5d-a67a-c6929de727b9-kube-api-access-p4pvn\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224700 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-srv-cert\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224797 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/940da15d-4365-40e8-9f00-33fecfb1e6c6-service-ca-bundle\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.224909 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f55f77b6-8205-47a4-a420-f1cd7ffc7411-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225174 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-config-volume\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225282 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-default-certificate\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 
28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55f77b6-8205-47a4-a420-f1cd7ffc7411-config\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225394 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-socket-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225508 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfbc\" (UniqueName: \"kubernetes.io/projected/940da15d-4365-40e8-9f00-33fecfb1e6c6-kube-api-access-6lfbc\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225602 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19844673-a712-45d6-8b90-ddd98c2f1e97-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225651 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkcl4\" (UniqueName: \"kubernetes.io/projected/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-kube-api-access-vkcl4\") pod \"dns-default-7jk4g\" 
(UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225696 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-plugins-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225750 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f55f77b6-8205-47a4-a420-f1cd7ffc7411-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225758 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-socket-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225789 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-csi-data-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225827 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-node-bootstrap-token\") pod \"machine-config-server-gmjwg\" 
(UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.225887 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-plugins-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226006 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-metrics-certs\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226041 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-csi-data-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226159 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226198 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226234 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-registration-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226276 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226307 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-968cp\" (UniqueName: \"kubernetes.io/projected/29229b27-aee6-4450-ade6-ed702af8d343-kube-api-access-968cp\") pod \"migrator-59844c95c7-n22ft\" (UID: \"29229b27-aee6-4450-ade6-ed702af8d343\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226346 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3293163b-b75d-40f1-b004-8d938c413a4b-registration-dir\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226376 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8s7\" (UniqueName: 
\"kubernetes.io/projected/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-kube-api-access-8z8s7\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226422 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19844673-a712-45d6-8b90-ddd98c2f1e97-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226449 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56m9s\" (UniqueName: \"kubernetes.io/projected/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-kube-api-access-56m9s\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226479 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-proxy-tls\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.226513 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5j8t\" (UniqueName: \"kubernetes.io/projected/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-kube-api-access-s5j8t\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:32 crc 
kubenswrapper[4624]: I0228 03:37:32.228013 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.229474 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-profile-collector-cert\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.230333 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14131bc3-3a7f-4152-84c2-9410e7fe638f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.241031 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.243713 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14131bc3-3a7f-4152-84c2-9410e7fe638f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:32 crc 
kubenswrapper[4624]: I0228 03:37:32.261356 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.282075 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.301537 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.308824 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19844673-a712-45d6-8b90-ddd98c2f1e97-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.322005 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.327985 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19844673-a712-45d6-8b90-ddd98c2f1e97-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.341654 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.361202 4624 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.368379 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-serving-cert\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.380788 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.389200 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-client\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.400899 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.408339 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-etcd-service-ca\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.422471 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.439974 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 28 03:37:32 crc 
kubenswrapper[4624]: I0228 03:37:32.460036 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.469363 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-config\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.480517 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.492953 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-metrics-certs\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.501417 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.510303 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-default-certificate\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.520595 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.531661 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/940da15d-4365-40e8-9f00-33fecfb1e6c6-stats-auth\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.540418 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.546701 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/940da15d-4365-40e8-9f00-33fecfb1e6c6-service-ca-bundle\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.560725 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.582381 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.600421 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.614070 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32058b9-ce5b-414e-9533-69a136730886-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fwdgt\" (UID: \"f32058b9-ce5b-414e-9533-69a136730886\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.621518 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 
03:37:32.641880 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.658477 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3ac81ca-3efe-4112-a8d0-9503bd1826b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-m4qb9\" (UID: \"d3ac81ca-3efe-4112-a8d0-9503bd1826b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.662271 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.683394 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.702331 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.706679 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-config\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.721641 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.730266 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.742035 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.761346 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.781928 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.790955 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f55f77b6-8205-47a4-a420-f1cd7ffc7411-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.801577 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.821377 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.826832 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f55f77b6-8205-47a4-a420-f1cd7ffc7411-config\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.838832 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.838987 4624 request.go:700] Waited for 1.005770056s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Feb 28 03:37:32 crc kubenswrapper[4624]: E0228 03:37:32.839183 4624 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:32 crc kubenswrapper[4624]: E0228 03:37:32.839319 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs podName:6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.839276524 +0000 UTC m=+115.503315993 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs") pod "network-metrics-daemon-85p9r" (UID: "6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.857126 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.862003 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.865493 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72bd2f2c-ca0d-4cde-b5a6-657236634f37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.885786 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.903582 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.924446 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/72bd2f2c-ca0d-4cde-b5a6-657236634f37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.931630 
4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.942918 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.956816 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.972428 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.980317 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:32 crc kubenswrapper[4624]: I0228 03:37:32.981369 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.019235 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58vz8\" (UniqueName: \"kubernetes.io/projected/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-kube-api-access-58vz8\") pod \"controller-manager-879f6c89f-xk27r\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.042830 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fplmb\" (UniqueName: \"kubernetes.io/projected/d7620f1c-2f80-43b3-ac28-8b3298c4ded6-kube-api-access-fplmb\") pod \"apiserver-7bbb656c7d-w4zpx\" (UID: \"d7620f1c-2f80-43b3-ac28-8b3298c4ded6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.062260 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.074253 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dcc8081-77c7-47b8-a357-7bf604280bcf-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ck5wp\" (UID: \"4dcc8081-77c7-47b8-a357-7bf604280bcf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.086130 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.086235 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.086662 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.087189 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.094931 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4q8v\" (UniqueName: \"kubernetes.io/projected/fe1b0a77-d59c-410a-bcbd-a17d327958ae-kube-api-access-z4q8v\") pod \"apiserver-76f77b778f-tztw9\" (UID: \"fe1b0a77-d59c-410a-bcbd-a17d327958ae\") " pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.119483 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qf8d\" (UniqueName: \"kubernetes.io/projected/b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d-kube-api-access-8qf8d\") pod \"machine-api-operator-5694c8668f-nhzzm\" (UID: \"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.120507 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.123602 4624 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.123685 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84768df9-4913-419b-b808-353e55de412b-proxy-tls podName:84768df9-4913-419b-b808-353e55de412b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.623658026 +0000 UTC m=+108.287697345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/84768df9-4913-419b-b808-353e55de412b-proxy-tls") pod "machine-config-operator-74547568cd-h69wx" (UID: "84768df9-4913-419b-b808-353e55de412b") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.124229 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d82af0-7eea-4c15-af5d-e58d1a0b6721-config-volume\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.128376 4624 secret.go:188] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.128524 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-metrics-tls podName:fbfd04f1-cedf-4a3d-b0a5-0a2130f02105 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.628502937 +0000 UTC m=+108.292542246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-metrics-tls") pod "dns-operator-744455d44c-n28nm" (UID: "fbfd04f1-cedf-4a3d-b0a5-0a2130f02105") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.132333 4624 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.132413 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-images podName:84768df9-4913-419b-b808-353e55de412b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.632394302 +0000 UTC m=+108.296433621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-images") pod "machine-config-operator-74547568cd-h69wx" (UID: "84768df9-4913-419b-b808-353e55de412b") : failed to sync configmap cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.135418 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.145162 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.153971 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.160172 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.181315 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.200669 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.201653 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.230581 4624 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.230638 4624 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.230681 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-certs podName:b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.730657017 +0000 UTC m=+108.394696336 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-certs") pod "machine-config-server-gmjwg" (UID: "b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.230823 4624 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.230943 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-node-bootstrap-token podName:b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.730909924 +0000 UTC m=+108.394949233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-node-bootstrap-token") pod "machine-config-server-gmjwg" (UID: "b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.230996 4624 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231027 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-proxy-tls podName:c2e6cc2d-b396-4eb2-8775-5836cc6ef10c nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.731018907 +0000 UTC m=+108.395058216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-proxy-tls") pod "machine-config-controller-84d6567774-j4fmq" (UID: "c2e6cc2d-b396-4eb2-8775-5836cc6ef10c") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231042 4624 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231063 4624 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231069 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-srv-cert podName:58b397be-ff15-406b-96fb-0bc29f605c61 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.731060808 +0000 UTC m=+108.395100117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-srv-cert") pod "catalog-operator-68c6474976-rcgs6" (UID: "58b397be-ff15-406b-96fb-0bc29f605c61") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231136 4624 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231164 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics podName:3912910a-bd9b-4b5d-a67a-c6929de727b9 nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:33.73115328 +0000 UTC m=+108.395192609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics") pod "marketplace-operator-79b997595-zmfld" (UID: "3912910a-bd9b-4b5d-a67a-c6929de727b9") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231186 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca podName:3912910a-bd9b-4b5d-a67a-c6929de727b9 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.731179341 +0000 UTC m=+108.395218650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca") pod "marketplace-operator-79b997595-zmfld" (UID: "3912910a-bd9b-4b5d-a67a-c6929de727b9") : failed to sync configmap cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231227 4624 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.231262 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-cert podName:e2acdca9-92fb-4bed-a5f4-cdffb5480e54 nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.731254403 +0000 UTC m=+108.395293722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-cert") pod "ingress-canary-h4pcb" (UID: "e2acdca9-92fb-4bed-a5f4-cdffb5480e54") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.232210 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-metrics-tls podName:22b6c28f-48dd-4e0f-826e-db6301e5dfcb nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.732178158 +0000 UTC m=+108.396217467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-metrics-tls") pod "dns-default-7jk4g" (UID: "22b6c28f-48dd-4e0f-826e-db6301e5dfcb") : failed to sync secret cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.235437 4624 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: E0228 03:37:33.235493 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-config-volume podName:22b6c28f-48dd-4e0f-826e-db6301e5dfcb nodeName:}" failed. No retries permitted until 2026-02-28 03:37:33.735480387 +0000 UTC m=+108.399519696 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-config-volume") pod "dns-default-7jk4g" (UID: "22b6c28f-48dd-4e0f-826e-db6301e5dfcb") : failed to sync configmap cache: timed out waiting for the condition Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.235495 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.246072 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.247408 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.262810 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.281244 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.317433 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt78s\" (UniqueName: \"kubernetes.io/projected/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-kube-api-access-kt78s\") pod \"route-controller-manager-6576b87f9c-z4285\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.321481 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.341190 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.350805 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nhzzm"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.362885 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.381545 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.392961 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tztw9"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.400291 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 28 03:37:33 crc kubenswrapper[4624]: W0228 03:37:33.405723 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe1b0a77_d59c_410a_bcbd_a17d327958ae.slice/crio-095ff3e62b994e981a4c8c2d61e9f81e26c061a3be653d71c32a77e638273287 WatchSource:0}: Error finding container 095ff3e62b994e981a4c8c2d61e9f81e26c061a3be653d71c32a77e638273287: Status 404 returned error can't find the container with id 095ff3e62b994e981a4c8c2d61e9f81e26c061a3be653d71c32a77e638273287 Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.415011 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.421674 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.456621 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cnb4\" (UniqueName: \"kubernetes.io/projected/e8913a76-5e7d-4d49-a9a4-388c052cf594-kube-api-access-4cnb4\") pod \"oauth-openshift-558db77b4-pcvf9\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.464389 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.478104 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xk27r"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.482320 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 03:37:33 crc kubenswrapper[4624]: W0228 03:37:33.497808 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f WatchSource:0}: Error finding container af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f: Status 404 returned error can't find the container with id af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.518385 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0936e64b-6cac-4a66-a450-549b46c62631-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l4l6f\" (UID: \"0936e64b-6cac-4a66-a450-549b46c62631\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.534559 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fmg\" (UniqueName: \"kubernetes.io/projected/4ff9da47-c5fc-413b-ba6d-d3c93594ea14-kube-api-access-86fmg\") pod \"openshift-apiserver-operator-796bbdcf4f-7wtnq\" (UID: \"4ff9da47-c5fc-413b-ba6d-d3c93594ea14\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.554864 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb8tg\" (UniqueName: \"kubernetes.io/projected/b88fa632-5c8d-4728-b71c-024c96f40f58-kube-api-access-jb8tg\") pod \"authentication-operator-69f744f599-x6j2h\" (UID: \"b88fa632-5c8d-4728-b71c-024c96f40f58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.561963 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.582283 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.596699 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.596795 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.601747 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.614523 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.621421 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.656420 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" event={"ID":"0936e64b-6cac-4a66-a450-549b46c62631","Type":"ContainerStarted","Data":"083299befacd5148bf23e85fb9775006da35aa4d21158f57b307cd07f9c15cd1"} Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.657426 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.665296 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.665613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" event={"ID":"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d","Type":"ContainerStarted","Data":"1fca9dd1d254c380256586e0a79a2bdbc3217f027a5c530ffa689e66e1336978"} Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.665669 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" 
event={"ID":"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d","Type":"ContainerStarted","Data":"bb010db32779c861ba995155627f78a62614785129dfeafd9dfb6ccbde4ce817"} Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.672034 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-images\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.672658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" event={"ID":"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c","Type":"ContainerStarted","Data":"af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f"} Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.675772 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.676273 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-metrics-tls\") pod \"dns-operator-744455d44c-n28nm\" (UID: \"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.676468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/84768df9-4913-419b-b808-353e55de412b-images\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.676475 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84768df9-4913-419b-b808-353e55de412b-proxy-tls\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.680851 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84768df9-4913-419b-b808-353e55de412b-proxy-tls\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.681928 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" event={"ID":"fe1b0a77-d59c-410a-bcbd-a17d327958ae","Type":"ContainerStarted","Data":"095ff3e62b994e981a4c8c2d61e9f81e26c061a3be653d71c32a77e638273287"} Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.683275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-metrics-tls\") pod \"dns-operator-744455d44c-n28nm\" (UID: \"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.683775 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.710716 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.720267 4624 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.732920 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.768474 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdgzh\" (UniqueName: \"kubernetes.io/projected/62ec2f2d-58e7-41bf-969d-b91b920c9faa-kube-api-access-mdgzh\") pod \"machine-approver-56656f9798-8glmv\" (UID: \"62ec2f2d-58e7-41bf-969d-b91b920c9faa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:33 crc kubenswrapper[4624]: W0228 03:37:33.769272 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7620f1c_2f80_43b3_ac28_8b3298c4ded6.slice/crio-67608422e03b7119336c41b45b2e37b7fd24e488ea0248703cde788df9e0355b WatchSource:0}: Error finding container 67608422e03b7119336c41b45b2e37b7fd24e488ea0248703cde788df9e0355b: Status 404 returned error can't find the container with id 67608422e03b7119336c41b45b2e37b7fd24e488ea0248703cde788df9e0355b Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777399 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-node-bootstrap-token\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777548 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777603 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777689 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-proxy-tls\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-metrics-tls\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777846 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-certs\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777883 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-cert\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " 
pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777944 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-srv-cert\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.777993 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-config-volume\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.780580 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.782763 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.783621 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58b397be-ff15-406b-96fb-0bc29f605c61-srv-cert\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: 
\"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.784535 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-proxy-tls\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.784873 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.788005 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-node-bootstrap-token\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.795323 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvgr\" (UniqueName: \"kubernetes.io/projected/c4102d73-26d6-461b-ac53-bfb4592a5e2b-kube-api-access-8lvgr\") pod \"service-ca-9c57cc56f-rpmzg\" (UID: \"c4102d73-26d6-461b-ac53-bfb4592a5e2b\") " pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.801273 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-certs\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:33 crc kubenswrapper[4624]: 
I0228 03:37:33.806688 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csf9v\" (UniqueName: \"kubernetes.io/projected/e7bc367c-d6e3-4b04-a16f-17ed7b69a796-kube-api-access-csf9v\") pod \"openshift-controller-manager-operator-756b6f6bc6-q469k\" (UID: \"e7bc367c-d6e3-4b04-a16f-17ed7b69a796\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.821561 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgsl\" (UniqueName: \"kubernetes.io/projected/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-kube-api-access-phgsl\") pod \"cni-sysctl-allowlist-ds-hztbp\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.831791 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.836672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82rsd\" (UniqueName: \"kubernetes.io/projected/8ba76435-5533-4104-8fb6-b5be5f354eb6-kube-api-access-82rsd\") pod \"openshift-config-operator-7777fb866f-fl4dx\" (UID: \"8ba76435-5533-4104-8fb6-b5be5f354eb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.840216 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.840587 4624 request.go:700] Waited for 1.834990104s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.861879 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x6j2h"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.862020 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.870786 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.882268 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9szkf\" (UniqueName: \"kubernetes.io/projected/63191dc2-3a46-435d-9e6d-158fe21737e1-kube-api-access-9szkf\") pod \"console-f9d7485db-ssl5n\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.888231 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmtt\" (UniqueName: \"kubernetes.io/projected/05428018-12ae-4524-b6f0-3abae46397dd-kube-api-access-bhmtt\") pod \"cluster-samples-operator-665b6dd947-spbcs\" (UID: \"05428018-12ae-4524-b6f0-3abae46397dd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.895239 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.900365 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.915641 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-metrics-tls\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.920598 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.929931 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-config-volume\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.944205 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.965838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmjc\" (UniqueName: \"kubernetes.io/projected/a59a9283-102e-4e3b-addd-bda023aabec2-kube-api-access-bpmjc\") pod \"console-operator-58897d9998-4mctl\" (UID: \"a59a9283-102e-4e3b-addd-bda023aabec2\") " pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.976510 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pcvf9"] Feb 28 03:37:33 crc kubenswrapper[4624]: I0228 03:37:33.991703 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4sm2\" (UniqueName: \"kubernetes.io/projected/33362ea2-94ea-4770-863b-ff417db50389-kube-api-access-r4sm2\") pod \"service-ca-operator-777779d784-g5cq6\" (UID: \"33362ea2-94ea-4770-863b-ff417db50389\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.001830 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.016850 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbzm7\" (UniqueName: \"kubernetes.io/projected/407a407f-0b60-4ea0-8737-ee20b3cf6ce2-kube-api-access-kbzm7\") pod \"olm-operator-6b444d44fb-x9vc9\" (UID: \"407a407f-0b60-4ea0-8737-ee20b3cf6ce2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:34 crc kubenswrapper[4624]: W0228 03:37:34.016993 4624 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8913a76_5e7d_4d49_a9a4_388c052cf594.slice/crio-556fd9337129fdf7ae397c61bcea7e5a7dd52417c1ae7a81c0e987353e859c9e WatchSource:0}: Error finding container 556fd9337129fdf7ae397c61bcea7e5a7dd52417c1ae7a81c0e987353e859c9e: Status 404 returned error can't find the container with id 556fd9337129fdf7ae397c61bcea7e5a7dd52417c1ae7a81c0e987353e859c9e Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.021747 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.037308 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.041800 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-cert\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.041900 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 28 03:37:34 crc kubenswrapper[4624]: W0228 03:37:34.052285 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ec2f2d_58e7_41bf_969d_b91b920c9faa.slice/crio-390fdbbca20e21e74c3f03d397af3880d500b56ee802dc9e7a897b23e8a014cf WatchSource:0}: Error finding container 390fdbbca20e21e74c3f03d397af3880d500b56ee802dc9e7a897b23e8a014cf: Status 404 returned error can't find the container with id 390fdbbca20e21e74c3f03d397af3880d500b56ee802dc9e7a897b23e8a014cf Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.062987 4624 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.077455 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.098715 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49x9\" (UniqueName: \"kubernetes.io/projected/7bbee3e3-9ea7-4f61-a206-fd7f6058f208-kube-api-access-h49x9\") pod \"packageserver-d55dfcdfc-2j628\" (UID: \"7bbee3e3-9ea7-4f61-a206-fd7f6058f208\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.102163 4624 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.114202 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.123352 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.123832 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.140752 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.154740 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.165974 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.194874 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.228618 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/556785c7-5f4e-4e1f-b7b1-13b5c0653ee8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9rk94\" (UID: \"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.238598 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.240227 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhfm8\" (UniqueName: \"kubernetes.io/projected/4dcc8081-77c7-47b8-a357-7bf604280bcf-kube-api-access-rhfm8\") pod \"package-server-manager-789f6589d5-ck5wp\" (UID: \"4dcc8081-77c7-47b8-a357-7bf604280bcf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.258454 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txrbn\" (UniqueName: \"kubernetes.io/projected/14131bc3-3a7f-4152-84c2-9410e7fe638f-kube-api-access-txrbn\") pod \"kube-storage-version-migrator-operator-b67b599dd-bsm95\" (UID: \"14131bc3-3a7f-4152-84c2-9410e7fe638f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.278714 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.283528 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hcbj\" (UniqueName: \"kubernetes.io/projected/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-kube-api-access-9hcbj\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.303909 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72bd2f2c-ca0d-4cde-b5a6-657236634f37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.327689 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhzf\" (UniqueName: \"kubernetes.io/projected/f32058b9-ce5b-414e-9533-69a136730886-kube-api-access-9vhzf\") pod \"multus-admission-controller-857f4d67dd-fwdgt\" (UID: \"f32058b9-ce5b-414e-9533-69a136730886\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.366175 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj95r\" (UniqueName: \"kubernetes.io/projected/299baa07-011e-4629-808b-f86667b5cd82-kube-api-access-sj95r\") pod \"downloads-7954f5f757-psbkg\" (UID: \"299baa07-011e-4629-808b-f86667b5cd82\") " pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.381201 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5qjf\" (UniqueName: 
\"kubernetes.io/projected/84768df9-4913-419b-b808-353e55de412b-kube-api-access-d5qjf\") pod \"machine-config-operator-74547568cd-h69wx\" (UID: \"84768df9-4913-419b-b808-353e55de412b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.381681 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzkh5\" (UID: \"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.394994 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8tl\" (UniqueName: \"kubernetes.io/projected/fbfd04f1-cedf-4a3d-b0a5-0a2130f02105-kube-api-access-zv8tl\") pod \"dns-operator-744455d44c-n28nm\" (UID: \"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105\") " pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.404749 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.423691 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5hj8\" (UniqueName: \"kubernetes.io/projected/0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20-kube-api-access-b5hj8\") pod \"etcd-operator-b45778765-p56rc\" (UID: \"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.449387 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5kn\" (UniqueName: \"kubernetes.io/projected/d3ac81ca-3efe-4112-a8d0-9503bd1826b7-kube-api-access-gq5kn\") pod \"control-plane-machine-set-operator-78cbb6b69f-m4qb9\" (UID: \"d3ac81ca-3efe-4112-a8d0-9503bd1826b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.456511 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ssl5n"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.459192 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-rpmzg"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.460176 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.468482 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5t4\" (UniqueName: \"kubernetes.io/projected/11d82af0-7eea-4c15-af5d-e58d1a0b6721-kube-api-access-wj5t4\") pod \"collect-profiles-29537490-bm2gd\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.475143 4624 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.478314 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.488344 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64fwf\" (UniqueName: \"kubernetes.io/projected/72bd2f2c-ca0d-4cde-b5a6-657236634f37-kube-api-access-64fwf\") pod \"cluster-image-registry-operator-dc59b4c8b-hr44k\" (UID: \"72bd2f2c-ca0d-4cde-b5a6-657236634f37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.495309 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.495747 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hktj\" (UniqueName: \"kubernetes.io/projected/3293163b-b75d-40f1-b004-8d938c413a4b-kube-api-access-2hktj\") pod \"csi-hostpathplugin-sms8n\" (UID: \"3293163b-b75d-40f1-b004-8d938c413a4b\") " pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.518896 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.527186 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.546382 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmzx\" (UniqueName: \"kubernetes.io/projected/58b397be-ff15-406b-96fb-0bc29f605c61-kube-api-access-fwmzx\") pod \"catalog-operator-68c6474976-rcgs6\" (UID: \"58b397be-ff15-406b-96fb-0bc29f605c61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:34 crc kubenswrapper[4624]: W0228 03:37:34.548599 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4102d73_26d6_461b_ac53_bfb4592a5e2b.slice/crio-6014432ac11ba6c04618d841a93620ca247a5eecdd9e4eee6f46047f4d469c13 WatchSource:0}: Error finding container 6014432ac11ba6c04618d841a93620ca247a5eecdd9e4eee6f46047f4d469c13: Status 404 returned error can't find the container with id 6014432ac11ba6c04618d841a93620ca247a5eecdd9e4eee6f46047f4d469c13 Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.551223 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.555885 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.560494 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pvn\" (UniqueName: \"kubernetes.io/projected/3912910a-bd9b-4b5d-a67a-c6929de727b9-kube-api-access-p4pvn\") pod \"marketplace-operator-79b997595-zmfld\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.563162 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.565864 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f55f77b6-8205-47a4-a420-f1cd7ffc7411-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cm5l9\" (UID: \"f55f77b6-8205-47a4-a420-f1cd7ffc7411\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.587377 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkcl4\" (UniqueName: \"kubernetes.io/projected/22b6c28f-48dd-4e0f-826e-db6301e5dfcb-kube-api-access-vkcl4\") pod \"dns-default-7jk4g\" (UID: \"22b6c28f-48dd-4e0f-826e-db6301e5dfcb\") " pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:34 crc kubenswrapper[4624]: W0228 03:37:34.587551 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7bc367c_d6e3_4b04_a16f_17ed7b69a796.slice/crio-4e31f84517741b5cef41a035ca602ab85b0701023c612659c766541003be211d WatchSource:0}: Error finding container 4e31f84517741b5cef41a035ca602ab85b0701023c612659c766541003be211d: Status 404 
returned error can't find the container with id 4e31f84517741b5cef41a035ca602ab85b0701023c612659c766541003be211d Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.589132 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.597073 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.600686 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfbc\" (UniqueName: \"kubernetes.io/projected/940da15d-4365-40e8-9f00-33fecfb1e6c6-kube-api-access-6lfbc\") pod \"router-default-5444994796-pcq7q\" (UID: \"940da15d-4365-40e8-9f00-33fecfb1e6c6\") " pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.606700 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.632468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19844673-a712-45d6-8b90-ddd98c2f1e97-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gb7fg\" (UID: \"19844673-a712-45d6-8b90-ddd98c2f1e97\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.641594 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-968cp\" (UniqueName: \"kubernetes.io/projected/29229b27-aee6-4450-ade6-ed702af8d343-kube-api-access-968cp\") pod \"migrator-59844c95c7-n22ft\" (UID: \"29229b27-aee6-4450-ade6-ed702af8d343\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.649720 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.656726 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.663693 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.668125 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8s7\" (UniqueName: \"kubernetes.io/projected/e2acdca9-92fb-4bed-a5f4-cdffb5480e54-kube-api-access-8z8s7\") pod \"ingress-canary-h4pcb\" (UID: \"e2acdca9-92fb-4bed-a5f4-cdffb5480e54\") " pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.671326 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h4pcb" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.682611 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56m9s\" (UniqueName: \"kubernetes.io/projected/c2e6cc2d-b396-4eb2-8775-5836cc6ef10c-kube-api-access-56m9s\") pod \"machine-config-controller-84d6567774-j4fmq\" (UID: \"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.689178 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.705005 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" event={"ID":"8ba76435-5533-4104-8fb6-b5be5f354eb6","Type":"ContainerStarted","Data":"fe1c79f5ab7021761c7d3712ac52423affbe97cf2859b93dea93a1334df32840"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.711539 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5j8t\" (UniqueName: \"kubernetes.io/projected/b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4-kube-api-access-s5j8t\") pod \"machine-config-server-gmjwg\" (UID: \"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4\") " pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.720936 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" event={"ID":"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c","Type":"ContainerStarted","Data":"954af90411c39e7a6714e8e488b9ede03c6f56d35dab831b3cce882e17fa92f9"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.721626 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.733555 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.733827 4624 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xk27r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.733862 4624 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.741491 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4mctl"] Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.748938 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.762433 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.763905 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" event={"ID":"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba","Type":"ContainerStarted","Data":"b5da23d9fc78ece75b213428a7cb25e40cdd4a8a37197da0ab006f608aa307ff"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.763960 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" event={"ID":"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba","Type":"ContainerStarted","Data":"bdbbeff4665c939188e772e896c1c252f17e9f650973ac5629fc6c9f3eca5592"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.764308 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.766032 4624 generic.go:334] "Generic (PLEG): container finished" podID="fe1b0a77-d59c-410a-bcbd-a17d327958ae" 
containerID="8b995a4181ea78fdccfdc78d4a269e617c66ea1ec4d1ae657365b79f39da7d31" exitCode=0 Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.766553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" event={"ID":"fe1b0a77-d59c-410a-bcbd-a17d327958ae","Type":"ContainerDied","Data":"8b995a4181ea78fdccfdc78d4a269e617c66ea1ec4d1ae657365b79f39da7d31"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.772996 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" event={"ID":"c4102d73-26d6-461b-ac53-bfb4592a5e2b","Type":"ContainerStarted","Data":"6014432ac11ba6c04618d841a93620ca247a5eecdd9e4eee6f46047f4d469c13"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.782273 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.783838 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" event={"ID":"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6","Type":"ContainerStarted","Data":"c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.783890 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" event={"ID":"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6","Type":"ContainerStarted","Data":"6c2589090ac8cfd1d8dbaabf6275e329f33e351f9870eafb9dd733d0ade63e2a"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.784538 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.785893 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.802421 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.802836 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.803972 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" event={"ID":"62ec2f2d-58e7-41bf-969d-b91b920c9faa","Type":"ContainerStarted","Data":"390fdbbca20e21e74c3f03d397af3880d500b56ee802dc9e7a897b23e8a014cf"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.811623 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" event={"ID":"b88fa632-5c8d-4728-b71c-024c96f40f58","Type":"ContainerStarted","Data":"4447f126c6d9cd659ab3270727eac676c4b876d132eb9fd676c4ebdf2af1dc16"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.811674 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" event={"ID":"b88fa632-5c8d-4728-b71c-024c96f40f58","Type":"ContainerStarted","Data":"d9726e08d3be5935b5247b95d605e4e9bf16601422b6cd78d7e16eb83d082ece"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.822668 4624 generic.go:334] "Generic (PLEG): container finished" podID="d7620f1c-2f80-43b3-ac28-8b3298c4ded6" containerID="fa8a3457372f679862af270198ee1ada134f53f4f5b1991604d950bd3705144c" exitCode=0 Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.822779 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" 
event={"ID":"d7620f1c-2f80-43b3-ac28-8b3298c4ded6","Type":"ContainerDied","Data":"fa8a3457372f679862af270198ee1ada134f53f4f5b1991604d950bd3705144c"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.822808 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" event={"ID":"d7620f1c-2f80-43b3-ac28-8b3298c4ded6","Type":"ContainerStarted","Data":"67608422e03b7119336c41b45b2e37b7fd24e488ea0248703cde788df9e0355b"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.834020 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.834502 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" event={"ID":"0936e64b-6cac-4a66-a450-549b46c62631","Type":"ContainerStarted","Data":"57c4e5afc9d9e54ee1e8959788eea0c51c2652723eee61f23a8ae9eef37934ee"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.836922 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" event={"ID":"b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d","Type":"ContainerStarted","Data":"358ca611142e8d993e4e63a043565d9baf86a05074aae90cbd95eda8f45306a2"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.838557 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" event={"ID":"4ff9da47-c5fc-413b-ba6d-d3c93594ea14","Type":"ContainerStarted","Data":"a9d2954b192a37bb1ee86022c33e44aed923a7d10e098f96e745504eeccbb647"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.848830 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.870646 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" event={"ID":"e7bc367c-d6e3-4b04-a16f-17ed7b69a796","Type":"ContainerStarted","Data":"4e31f84517741b5cef41a035ca602ab85b0701023c612659c766541003be211d"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.905747 4624 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908470 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phc2t\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-kube-api-access-phc2t\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908569 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-bound-sa-token\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908628 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823705e-af27-4b37-98f8-f73d31f69e02-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 
28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908683 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-trusted-ca\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908704 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-registry-certificates\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908731 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-registry-tls\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908799 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.908831 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/5823705e-af27-4b37-98f8-f73d31f69e02-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:34 crc kubenswrapper[4624]: E0228 03:37:34.913701 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.413675189 +0000 UTC m=+110.077714718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.915580 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.926488 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ssl5n" event={"ID":"63191dc2-3a46-435d-9e6d-158fe21737e1","Type":"ContainerStarted","Data":"3dc5873b3866e8973931e9419e6a3457e23b95d1931694c82efc70e9efa93d89"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.931635 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" event={"ID":"e8913a76-5e7d-4d49-a9a4-388c052cf594","Type":"ContainerStarted","Data":"556fd9337129fdf7ae397c61bcea7e5a7dd52417c1ae7a81c0e987353e859c9e"} Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.934472 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.935109 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.943035 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gmjwg" Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.977371 4624 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pcvf9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 28 03:37:34 crc kubenswrapper[4624]: I0228 03:37:34.977430 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.014776 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.015374 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phc2t\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-kube-api-access-phc2t\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.015694 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-bound-sa-token\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.016157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823705e-af27-4b37-98f8-f73d31f69e02-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.016331 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-trusted-ca\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.016384 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-registry-certificates\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.016458 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-registry-tls\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.021881 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:35.521847872 +0000 UTC m=+110.185887181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.028449 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-trusted-ca\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.028964 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.029416 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823705e-af27-4b37-98f8-f73d31f69e02-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.034866 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-registry-certificates\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.055852 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.555829844 +0000 UTC m=+110.219869153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.058611 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823705e-af27-4b37-98f8-f73d31f69e02-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.069488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823705e-af27-4b37-98f8-f73d31f69e02-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.073378 4624 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.092003 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phc2t\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-kube-api-access-phc2t\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.094699 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-registry-tls\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.113957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-bound-sa-token\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.114032 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.116307 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-psbkg"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.131826 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.131944 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.631924688 +0000 UTC m=+110.295963997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.132299 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.132610 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.632601315 +0000 UTC m=+110.296640624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.233687 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.234164 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.73414914 +0000 UTC m=+110.398188449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.343453 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.343812 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.843798133 +0000 UTC m=+110.507837442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.416880 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.444193 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.444580 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:35.944565096 +0000 UTC m=+110.608604405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.453773 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" podStartSLOduration=50.453752415 podStartE2EDuration="50.453752415s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:35.453464708 +0000 UTC m=+110.117504017" watchObservedRunningTime="2026-02-28 03:37:35.453752415 +0000 UTC m=+110.117791724" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.482488 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.506347 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" podStartSLOduration=49.506321591 podStartE2EDuration="49.506321591s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:35.505708664 +0000 UTC m=+110.169747973" watchObservedRunningTime="2026-02-28 03:37:35.506321591 +0000 UTC m=+110.170360900" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.511284 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.529707 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.545635 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.546117 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.0460996 +0000 UTC m=+110.710138909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.648554 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.648893 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.148860116 +0000 UTC m=+110.812899425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.649338 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.649761 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.149746281 +0000 UTC m=+110.813785590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.662283 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x6j2h" podStartSLOduration=50.66225998 podStartE2EDuration="50.66225998s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:35.600070593 +0000 UTC m=+110.264109902" watchObservedRunningTime="2026-02-28 03:37:35.66225998 +0000 UTC m=+110.326299289" Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.760922 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.761449 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.261428639 +0000 UTC m=+110.925467948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.839850 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.862712 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.863154 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.363141217 +0000 UTC m=+111.027180526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.910723 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fwdgt"] Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.943373 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" event={"ID":"e7bc367c-d6e3-4b04-a16f-17ed7b69a796","Type":"ContainerStarted","Data":"338da0805673f65911d54d4f3bb0f52ba06d2789bbc394b2b5999d7af727a8a8"} Feb 28 03:37:35 crc kubenswrapper[4624]: I0228 03:37:35.967171 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:35 crc kubenswrapper[4624]: E0228 03:37:35.968297 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.468280898 +0000 UTC m=+111.132320207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.002983 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" event={"ID":"4dcc8081-77c7-47b8-a357-7bf604280bcf","Type":"ContainerStarted","Data":"4bb5a4d489ae686ca0b2db4c49f00c4a432fead89bf5621b206e84854681a81f"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.075521 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" podStartSLOduration=51.075494886 podStartE2EDuration="51.075494886s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:36.053695325 +0000 UTC m=+110.717734634" watchObservedRunningTime="2026-02-28 03:37:36.075494886 +0000 UTC m=+110.739534185" Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.076389 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.078204 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.578185859 +0000 UTC m=+111.242225168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.178530 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.179232 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.679209739 +0000 UTC m=+111.343249058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.285198 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.296324 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:36.796306685 +0000 UTC m=+111.460345994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.316170 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9"] Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.316219 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" event={"ID":"62ec2f2d-58e7-41bf-969d-b91b920c9faa","Type":"ContainerStarted","Data":"e9f68006cd3c02a8d643cf6a4577e0859126d92e0b4c5f66a7bcdd929face07d"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.316238 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pcq7q" event={"ID":"940da15d-4365-40e8-9f00-33fecfb1e6c6","Type":"ContainerStarted","Data":"0ee8bf1db18577417280e1955b9014c6722d91b48b68eb16742e165df7b13bfa"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.415921 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.416476 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-28 03:37:36.916454132 +0000 UTC m=+111.580493441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.505141 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" podStartSLOduration=51.505114557 podStartE2EDuration="51.505114557s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:36.458861963 +0000 UTC m=+111.122901272" watchObservedRunningTime="2026-02-28 03:37:36.505114557 +0000 UTC m=+111.169153866" Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.517944 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.518380 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.018365796 +0000 UTC m=+111.682405105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.545205 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7wtnq" event={"ID":"4ff9da47-c5fc-413b-ba6d-d3c93594ea14","Type":"ContainerStarted","Data":"ea8581e703d0686fc8cd2a72842a4c729a8e3af7aa3acdd7ec748f02004631d9"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.555016 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-p56rc"] Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.568974 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" event={"ID":"05428018-12ae-4524-b6f0-3abae46397dd","Type":"ContainerStarted","Data":"50a89641baa1f5f472dd76313933d5a4821bce07eb4421f2701585a0c9cc3020"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.569329 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95"] Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.618531 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:36 crc 
kubenswrapper[4624]: E0228 03:37:36.618972 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.118955544 +0000 UTC m=+111.782994843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.645257 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"] Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.651315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" event={"ID":"8ba76435-5533-4104-8fb6-b5be5f354eb6","Type":"ContainerStarted","Data":"018651a80f1bcf33bd1390240cb110eda8cd66f8df9dc25b8f5c63f1a58f98fc"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.665362 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gmjwg" event={"ID":"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4","Type":"ContainerStarted","Data":"4d70265b18a06448275f77c5df141a103a20f48602faf364889bf7a02240d23c"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.670877 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4mctl" 
event={"ID":"a59a9283-102e-4e3b-addd-bda023aabec2","Type":"ContainerStarted","Data":"1cbc43fc3cfa248729bfc60b46021789bc38524f9fd581ae02727c9843bf8196"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.671572 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.674855 4624 patch_prober.go:28] interesting pod/console-operator-58897d9998-4mctl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.674896 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4mctl" podUID="a59a9283-102e-4e3b-addd-bda023aabec2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.680750 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" event={"ID":"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8","Type":"ContainerStarted","Data":"8f8eee73f246abe8ff8b2be8b06ad140b314c3c408ce2cf1ad206d61f14add9d"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.687344 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" event={"ID":"407a407f-0b60-4ea0-8737-ee20b3cf6ce2","Type":"ContainerStarted","Data":"c8c500271204fbc2146546c1d1ab501e9b150ec08cc6c7c582f61f3ffd69579a"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.722049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.722605 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.222580674 +0000 UTC m=+111.886620173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.760464 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k"] Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.807150 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psbkg" event={"ID":"299baa07-011e-4629-808b-f86667b5cd82","Type":"ContainerStarted","Data":"7e3f5dba7d4f39f19f9814eb10bd6f7f5a45a222b18c7ae372aecb829e25fb9e"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.810223 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podStartSLOduration=5.81018194 podStartE2EDuration="5.81018194s" podCreationTimestamp="2026-02-28 03:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 03:37:36.806675775 +0000 UTC m=+111.470715084" watchObservedRunningTime="2026-02-28 03:37:36.81018194 +0000 UTC m=+111.474221249" Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.824954 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.826189 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.326155593 +0000 UTC m=+111.990194892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.842142 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" event={"ID":"e8913a76-5e7d-4d49-a9a4-388c052cf594","Type":"ContainerStarted","Data":"58873711eec886da6eddac8c3822efa5df942c4ed2483268d042eda2bfa84cb2"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.880431 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" 
event={"ID":"33362ea2-94ea-4770-863b-ff417db50389","Type":"ContainerStarted","Data":"f74797ebc033654b1765a2c9ca2664d853fb1bc6a663073d7dd59dbe9836a5cb"} Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.898914 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.927145 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:36 crc kubenswrapper[4624]: E0228 03:37:36.962253 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.462234823 +0000 UTC m=+112.126274132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:36 crc kubenswrapper[4624]: I0228 03:37:36.987910 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.044202 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.045898 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.545862092 +0000 UTC m=+112.209901401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.193430 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.194070 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.694057751 +0000 UTC m=+112.358097060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.278267 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zmfld"] Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.301576 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.301936 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.801922786 +0000 UTC m=+112.465962095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.362688 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4l6f" podStartSLOduration=52.362672133 podStartE2EDuration="52.362672133s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:37.323559512 +0000 UTC m=+111.987598821" watchObservedRunningTime="2026-02-28 03:37:37.362672133 +0000 UTC m=+112.026711442" Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.409648 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.410005 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:37.909994636 +0000 UTC m=+112.574033945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.490829 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-nhzzm" podStartSLOduration=51.490802828 podStartE2EDuration="51.490802828s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:37.401533737 +0000 UTC m=+112.065573046" watchObservedRunningTime="2026-02-28 03:37:37.490802828 +0000 UTC m=+112.154842147" Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.513683 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.514007 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.013993196 +0000 UTC m=+112.678032505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.614692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.615196 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.115077558 +0000 UTC m=+112.779116867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.665150 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h4pcb"] Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.715796 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.215756988 +0000 UTC m=+112.879796297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.725395 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.725953 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.726400 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.226364626 +0000 UTC m=+112.890403935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.830388 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.831114 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.331096726 +0000 UTC m=+112.995136035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.849286 4624 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pcvf9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.849343 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.887698 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" podStartSLOduration=51.8876492 podStartE2EDuration="51.8876492s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:37.878986145 +0000 UTC m=+112.543025454" watchObservedRunningTime="2026-02-28 03:37:37.8876492 +0000 UTC m=+112.551688509" Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.907835 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-h4pcb" event={"ID":"e2acdca9-92fb-4bed-a5f4-cdffb5480e54","Type":"ContainerStarted","Data":"c0ab05f1762c953191b4e449baebf41b7b773ec3d213ca936f844ec144d005b3"} Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.918153 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sms8n"] Feb 28 03:37:37 crc kubenswrapper[4624]: I0228 03:37:37.947588 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:37 crc kubenswrapper[4624]: E0228 03:37:37.948044 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.448027447 +0000 UTC m=+113.112066766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.039159 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n28nm"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.059444 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.063304 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.063370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" event={"ID":"4dcc8081-77c7-47b8-a357-7bf604280bcf","Type":"ContainerStarted","Data":"4adaaf15995befcf5552dd1df56fd1c49efe358a0308a2a721b64b29a14610a3"} Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.067893 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.567843437 +0000 UTC m=+113.231882746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.075393 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7jk4g"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.152466 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" event={"ID":"d3ac81ca-3efe-4112-a8d0-9503bd1826b7","Type":"ContainerStarted","Data":"fa05227666d7b6d25a7df020d13d549053d43be18e403b20bfe9b430268f5b1c"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.156493 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hztbp"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.156642 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.161248 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.161599 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.661585169 +0000 UTC m=+113.325624478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.205473 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4mctl" podStartSLOduration=53.205449988 podStartE2EDuration="53.205449988s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.198488999 +0000 UTC m=+112.862528308" watchObservedRunningTime="2026-02-28 03:37:38.205449988 +0000 UTC m=+112.869489297" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.225148 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ssl5n" event={"ID":"63191dc2-3a46-435d-9e6d-158fe21737e1","Type":"ContainerStarted","Data":"294076c871f077b10e77d68db58107450ad355235fb4841b712eb316390f12a0"} Feb 28 03:37:38 crc kubenswrapper[4624]: W0228 03:37:38.259919 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbfd04f1_cedf_4a3d_b0a5_0a2130f02105.slice/crio-231857c2dd6af2df12d525d6ebed02c615288c14adfea7a55667c745b5cbc414 WatchSource:0}: Error finding container 
231857c2dd6af2df12d525d6ebed02c615288c14adfea7a55667c745b5cbc414: Status 404 returned error can't find the container with id 231857c2dd6af2df12d525d6ebed02c615288c14adfea7a55667c745b5cbc414 Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.261215 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-rpmzg" event={"ID":"c4102d73-26d6-461b-ac53-bfb4592a5e2b","Type":"ContainerStarted","Data":"d798f084ae76369eea87d41d36821701ce209358eacb1c8b81549d8d6cf04a68"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.263025 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.264335 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.764308914 +0000 UTC m=+113.428348223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.272849 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q469k" podStartSLOduration=53.272834615 podStartE2EDuration="53.272834615s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.260718877 +0000 UTC m=+112.924758186" watchObservedRunningTime="2026-02-28 03:37:38.272834615 +0000 UTC m=+112.936873914" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.282329 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" event={"ID":"fe1b0a77-d59c-410a-bcbd-a17d327958ae","Type":"ContainerStarted","Data":"26fea3cd20a2642e6fae39788fbdbee3b4939e8736efa23349d8250aa3856c72"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.293764 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.320364 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg"] Feb 28 03:37:38 crc kubenswrapper[4624]: W0228 03:37:38.322484 4624 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22b6c28f_48dd_4e0f_826e_db6301e5dfcb.slice/crio-0660851728b3190d8b1a81f574bc8c9862e486601b19728dcef391b3cbeef282 WatchSource:0}: Error finding container 0660851728b3190d8b1a81f574bc8c9862e486601b19728dcef391b3cbeef282: Status 404 returned error can't find the container with id 0660851728b3190d8b1a81f574bc8c9862e486601b19728dcef391b3cbeef282 Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.330651 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" event={"ID":"33362ea2-94ea-4770-863b-ff417db50389","Type":"ContainerStarted","Data":"abc3cbfc9ca6423a12160f0d578ed3607b03ca9a1e8149eb10d7e7adf1d1ce8c"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.336470 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" podStartSLOduration=52.336451171 podStartE2EDuration="52.336451171s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.333988084 +0000 UTC m=+112.998027383" watchObservedRunningTime="2026-02-28 03:37:38.336451171 +0000 UTC m=+113.000490480" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.353674 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" event={"ID":"11d82af0-7eea-4c15-af5d-e58d1a0b6721","Type":"ContainerStarted","Data":"6d412b7b62a3c29bbe1aef7e50ac7f83b967ba153d81e65a6b048bbc4d394ee3"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.372006 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.374357 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.874339359 +0000 UTC m=+113.538378868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.376037 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" event={"ID":"3912910a-bd9b-4b5d-a67a-c6929de727b9","Type":"ContainerStarted","Data":"fe49544d46fadc77d90af17ee03c543326caf0b0e8e05d504b9282e40f9630d9"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.432602 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ssl5n" podStartSLOduration=53.432579867 podStartE2EDuration="53.432579867s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.430015548 +0000 UTC m=+113.094054857" watchObservedRunningTime="2026-02-28 03:37:38.432579867 +0000 UTC m=+113.096619176" Feb 28 03:37:38 crc 
kubenswrapper[4624]: I0228 03:37:38.433218 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.441390 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pcq7q" event={"ID":"940da15d-4365-40e8-9f00-33fecfb1e6c6","Type":"ContainerStarted","Data":"e3d33c149d193675518274138745f3dd0c63fd50447184c9d83ff7b4e43b73d3"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.475733 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.476036 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:38.976007135 +0000 UTC m=+113.640046634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.478120 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" event={"ID":"05428018-12ae-4524-b6f0-3abae46397dd","Type":"ContainerStarted","Data":"93b9723a90218b23c8f822fb057f9679c1844fb6ac494eb4c249cee52e99b646"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.525121 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" event={"ID":"84768df9-4913-419b-b808-353e55de412b","Type":"ContainerStarted","Data":"2ec8002feed91be21dc43ad977c5c6e5a1326199072cb85ecaf5890b2502def0"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.560659 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" event={"ID":"f32058b9-ce5b-414e-9533-69a136730886","Type":"ContainerStarted","Data":"d1d1d25d84d1d324c9ca0ecb196cb06472bbc465043c77b15aa9d7bbdf13799a"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.574952 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" event={"ID":"72bd2f2c-ca0d-4cde-b5a6-657236634f37","Type":"ContainerStarted","Data":"61305f7b7712eaae467cdb2655677d53f71199c35f2fb71404da4903d082eb8d"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.578555 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.585589 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.085569927 +0000 UTC m=+113.749609236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.588593 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" event={"ID":"d7620f1c-2f80-43b3-ac28-8b3298c4ded6","Type":"ContainerStarted","Data":"caee2096c7fd7b7d82a50fc5f31dce210454320acffeefccb6a04f31d9b3fef5"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.658100 4624 generic.go:334] "Generic (PLEG): container finished" podID="8ba76435-5533-4104-8fb6-b5be5f354eb6" containerID="018651a80f1bcf33bd1390240cb110eda8cd66f8df9dc25b8f5c63f1a58f98fc" exitCode=0 Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.658439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" 
event={"ID":"8ba76435-5533-4104-8fb6-b5be5f354eb6","Type":"ContainerDied","Data":"018651a80f1bcf33bd1390240cb110eda8cd66f8df9dc25b8f5c63f1a58f98fc"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.658748 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.681607 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psbkg" event={"ID":"299baa07-011e-4629-808b-f86667b5cd82","Type":"ContainerStarted","Data":"92ad1e7052b58a829f72af1ace2707b93c33e8b879a0d7888060c0ba20789867"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.682190 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.682988 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.684145 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.184121619 +0000 UTC m=+113.848160928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.685535 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.685604 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.709891 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5cq6" podStartSLOduration=52.709876217 podStartE2EDuration="52.709876217s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.553457746 +0000 UTC m=+113.217497055" watchObservedRunningTime="2026-02-28 03:37:38.709876217 +0000 UTC m=+113.373915516" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.738337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" 
event={"ID":"7bbee3e3-9ea7-4f61-a206-fd7f6058f208","Type":"ContainerStarted","Data":"cfc22d26b5e63efc60288c144dc09e3d1fbcb778b2983cd2a33a3a5eda6682b4"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.740995 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.749436 4624 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2j628 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.749504 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" podUID="7bbee3e3-9ea7-4f61-a206-fd7f6058f208" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.785678 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4mctl" event={"ID":"a59a9283-102e-4e3b-addd-bda023aabec2","Type":"ContainerStarted","Data":"caadf1634a46cb4b310071db7e2af18b566336fb161c997a3e704ebb8c3da2d4"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.787578 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.788046 4624 patch_prober.go:28] 
interesting pod/console-operator-58897d9998-4mctl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.788137 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4mctl" podUID="a59a9283-102e-4e3b-addd-bda023aabec2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.788784 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.288770207 +0000 UTC m=+113.952809516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.813515 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.813750 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.813785 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.825657 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" podStartSLOduration=53.825642597 podStartE2EDuration="53.825642597s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.724938686 +0000 UTC m=+113.388977995" watchObservedRunningTime="2026-02-28 03:37:38.825642597 +0000 UTC m=+113.489681896" Feb 28 03:37:38 crc kubenswrapper[4624]: 
I0228 03:37:38.827569 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9"] Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.839213 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pcq7q" podStartSLOduration=52.839189844 podStartE2EDuration="52.839189844s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.824010393 +0000 UTC m=+113.488049702" watchObservedRunningTime="2026-02-28 03:37:38.839189844 +0000 UTC m=+113.503229153" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.886229 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" event={"ID":"407a407f-0b60-4ea0-8737-ee20b3cf6ce2","Type":"ContainerStarted","Data":"48161fd6307674a3661a6e48276d2a81971dd676e8685c1a168a8e6c87b06832"} Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.887788 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.901293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.901469 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:39.401423502 +0000 UTC m=+114.065462811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.901692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:38 crc kubenswrapper[4624]: E0228 03:37:38.903448 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.403431117 +0000 UTC m=+114.067470426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.905257 4624 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x9vc9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.905318 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" podUID="407a407f-0b60-4ea0-8737-ee20b3cf6ce2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.926849 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" podStartSLOduration=53.92682324 podStartE2EDuration="53.92682324s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:38.926398899 +0000 UTC m=+113.590438208" watchObservedRunningTime="2026-02-28 03:37:38.92682324 +0000 UTC m=+113.590862549" Feb 28 03:37:38 crc kubenswrapper[4624]: I0228 03:37:38.953023 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" event={"ID":"62ec2f2d-58e7-41bf-969d-b91b920c9faa","Type":"ContainerStarted","Data":"c09fa5e102255393f0b9faaefe24ef3c9e4f947a6e17f2748321b5efa86e272e"} Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.005509 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.006656 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.506635095 +0000 UTC m=+114.170674404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.037769 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" event={"ID":"14131bc3-3a7f-4152-84c2-9410e7fe638f","Type":"ContainerStarted","Data":"5482d05490e802bce52e5b1a7c10eea021a3bba01cc218d2e1b106088d1cf9e4"} Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.057171 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" podStartSLOduration=54.057155255 podStartE2EDuration="54.057155255s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:39.056509848 +0000 UTC m=+113.720549147" watchObservedRunningTime="2026-02-28 03:37:39.057155255 +0000 UTC m=+113.721194564" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.090063 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.090355 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gmjwg" event={"ID":"b8a5a0f0-2716-43d5-a041-cf3a5cc72ca4","Type":"ContainerStarted","Data":"8fb616dfd46d72e4ad0e64ed1da90be272d296b1d25bfd3570c0317fc8b4d716"} Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 
03:37:39.124430 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.124818 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.62480397 +0000 UTC m=+114.288843289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.204575 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" event={"ID":"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20","Type":"ContainerStarted","Data":"e86b39b64776f3457e642c2fa4d8adf6cd6897ae16b1965b3cf660da3c199647"} Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.218976 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.231235 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.232822 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.732797129 +0000 UTC m=+114.396836438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.339450 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.350947 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.850929742 +0000 UTC m=+114.514969051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.399137 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" podStartSLOduration=53.399106449 podStartE2EDuration="53.399106449s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:39.166184032 +0000 UTC m=+113.830223341" watchObservedRunningTime="2026-02-28 03:37:39.399106449 +0000 UTC m=+114.063145758" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.441857 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.442556 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:39.942529376 +0000 UTC m=+114.606568685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.545973 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.546488 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.046472085 +0000 UTC m=+114.710511394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.651060 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.651544 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.151529084 +0000 UTC m=+114.815568393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.683898 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" podStartSLOduration=53.679371189 podStartE2EDuration="53.679371189s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:39.407516417 +0000 UTC m=+114.071555726" watchObservedRunningTime="2026-02-28 03:37:39.679371189 +0000 UTC m=+114.343410498" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.754201 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.754554 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.254539987 +0000 UTC m=+114.918579296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.778097 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-psbkg" podStartSLOduration=54.778062875 podStartE2EDuration="54.778062875s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:39.685870565 +0000 UTC m=+114.349909874" watchObservedRunningTime="2026-02-28 03:37:39.778062875 +0000 UTC m=+114.442102194" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.778328 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" podStartSLOduration=53.778323912 podStartE2EDuration="53.778323912s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:39.777562062 +0000 UTC m=+114.441601371" watchObservedRunningTime="2026-02-28 03:37:39.778323912 +0000 UTC m=+114.442363221" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.816952 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:39 crc 
kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:39 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:39 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.817515 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.860914 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.861708 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.361688523 +0000 UTC m=+115.025727832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:39 crc kubenswrapper[4624]: I0228 03:37:39.965854 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:39 crc kubenswrapper[4624]: E0228 03:37:39.966261 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.466250289 +0000 UTC m=+115.130289598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.018618 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" podStartSLOduration=54.018599638 podStartE2EDuration="54.018599638s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:39.893555148 +0000 UTC m=+114.557594457" watchObservedRunningTime="2026-02-28 03:37:40.018599638 +0000 UTC m=+114.682638947" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.102150 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.102369 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.602337119 +0000 UTC m=+115.266376418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.102802 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.103332 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.603298746 +0000 UTC m=+115.267338055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.161818 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" podStartSLOduration=55.161800082 podStartE2EDuration="55.161800082s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.021959179 +0000 UTC m=+114.685998488" watchObservedRunningTime="2026-02-28 03:37:40.161800082 +0000 UTC m=+114.825839381" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.203776 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.204272 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.704256194 +0000 UTC m=+115.368295503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.241466 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" event={"ID":"7bbee3e3-9ea7-4f61-a206-fd7f6058f208","Type":"ContainerStarted","Data":"3e0b0b9ac2478ebef5ab18e0f5d2a615dc1019d670f85d0800c0bb41a2ea2d98"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.262337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7jk4g" event={"ID":"22b6c28f-48dd-4e0f-826e-db6301e5dfcb","Type":"ContainerStarted","Data":"0660851728b3190d8b1a81f574bc8c9862e486601b19728dcef391b3cbeef282"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.293410 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" event={"ID":"4dcc8081-77c7-47b8-a357-7bf604280bcf","Type":"ContainerStarted","Data":"2d62d49e610c12f24bf4aa33b6cc79252b959292c2cbb288cd20ec6eb7a84e2e"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.294248 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.305994 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.306484 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.806468855 +0000 UTC m=+115.470508164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.314410 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-p56rc" event={"ID":"0d4939ae-cdfa-40d6-93aa-7a66bdc2ac20","Type":"ContainerStarted","Data":"44394b579af29088a0077dce2285ad20e11592070ec2acb6a68d940a488d506c"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.329852 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" event={"ID":"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c","Type":"ContainerStarted","Data":"d11d88ba4f6653a14d123035af01cfd6d68d4e4c778898105634b1ed5dd67f4c"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.329922 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" 
event={"ID":"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c","Type":"ContainerStarted","Data":"99f1c725cfec6d78fe99c3017c4728510a6568c04830ce4fda2da6da4daedb4c"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.348332 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" event={"ID":"3912910a-bd9b-4b5d-a67a-c6929de727b9","Type":"ContainerStarted","Data":"063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.349383 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.350726 4624 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zmfld container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.350823 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.368003 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h4pcb" event={"ID":"e2acdca9-92fb-4bed-a5f4-cdffb5480e54","Type":"ContainerStarted","Data":"fe1aab38f52b569380a3269996cf6e262d214a09a69e3cc14df59951dfae9d10"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.379224 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gmjwg" podStartSLOduration=9.379195058 
podStartE2EDuration="9.379195058s" podCreationTimestamp="2026-02-28 03:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.183883921 +0000 UTC m=+114.847923230" watchObservedRunningTime="2026-02-28 03:37:40.379195058 +0000 UTC m=+115.043234357" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.386563 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hr44k" event={"ID":"72bd2f2c-ca0d-4cde-b5a6-657236634f37","Type":"ContainerStarted","Data":"4e8412ca6edb80cdb236633016773901cc24b32b33816dde469169de6601799a"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.397562 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bsm95" event={"ID":"14131bc3-3a7f-4152-84c2-9410e7fe638f","Type":"ContainerStarted","Data":"58bd04dec27f25118e565aa137fcdc3e1caadde575151286c0bf79fe2efc8bf2"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.409778 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.409912 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.90988834 +0000 UTC m=+115.573927649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.410581 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.412331 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:40.912316655 +0000 UTC m=+115.576355964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.413905 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-m4qb9" event={"ID":"d3ac81ca-3efe-4112-a8d0-9503bd1826b7","Type":"ContainerStarted","Data":"87236c2e69b2eceb1490517a35bc7878198d2f29b9fb5e54e0b496b018b21329"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.430727 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" event={"ID":"3293163b-b75d-40f1-b004-8d938c413a4b","Type":"ContainerStarted","Data":"475bb1c7bae28a1c71926402d301983685ea1bbe9068b16dcb4ce2ab5a0cc6d6"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.439747 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" event={"ID":"556785c7-5f4e-4e1f-b7b1-13b5c0653ee8","Type":"ContainerStarted","Data":"d5016192bdce8999f4b4f471c25c4f392b9b28ad3dd96552b8fb53ea3fd02929"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.444572 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39690: no serving certificate available for the kubelet" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.446291 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" 
event={"ID":"f55f77b6-8205-47a4-a420-f1cd7ffc7411","Type":"ContainerStarted","Data":"efb0388ef31e48dd922c6011baf435c0408355b1ec64d1e98e2e31755789798f"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.470301 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8glmv" podStartSLOduration=55.470285388 podStartE2EDuration="55.470285388s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.410338422 +0000 UTC m=+115.074377731" watchObservedRunningTime="2026-02-28 03:37:40.470285388 +0000 UTC m=+115.134324697" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.482695 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-spbcs" event={"ID":"05428018-12ae-4524-b6f0-3abae46397dd","Type":"ContainerStarted","Data":"52133bfbc913abbe79adf16b1acfb777e122b433b762a4144d09473d29e4fa01"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.487801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" event={"ID":"19844673-a712-45d6-8b90-ddd98c2f1e97","Type":"ContainerStarted","Data":"88de956a13e405e807c1bc9d81352c90be39cf0cdfc8b24e064f2faf0117548b"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.515959 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.516437 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" event={"ID":"58b397be-ff15-406b-96fb-0bc29f605c61","Type":"ContainerStarted","Data":"bc527add94a4d8535c39d29ea05f4ce872f7b1ef0a1318ab9a03718e31aa330f"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.517105 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.517197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" event={"ID":"58b397be-ff15-406b-96fb-0bc29f605c61","Type":"ContainerStarted","Data":"5e467c2fe7d5710e6d5607430a32a7d70d90851daa8bd2aa3a0a5ca7efa07647"} Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.516840 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.01681842 +0000 UTC m=+115.680857729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.518529 4624 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rcgs6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.518577 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" podUID="58b397be-ff15-406b-96fb-0bc29f605c61" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.545560 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" event={"ID":"8ba76435-5533-4104-8fb6-b5be5f354eb6","Type":"ContainerStarted","Data":"5c59b337878c0fe408ac12e7c5484d2a313f8f24c1329cfe6743e07ff81bf682"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.571440 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" event={"ID":"11d82af0-7eea-4c15-af5d-e58d1a0b6721","Type":"ContainerStarted","Data":"e144a3f56cecfc8f771aa54c3eb17b7492d2659f7761160abe9f8827cf138078"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.592578 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podStartSLOduration=54.592551393 podStartE2EDuration="54.592551393s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.59057571 +0000 UTC m=+115.254615009" watchObservedRunningTime="2026-02-28 03:37:40.592551393 +0000 UTC m=+115.256590702" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.621346 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.624261 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.124243283 +0000 UTC m=+115.788282592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.652052 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" event={"ID":"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b","Type":"ContainerStarted","Data":"9d160af566e3452df2bd05cc1ed453cbaeab12f7ccb882e11b7f6811ec4d0447"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.652132 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" event={"ID":"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b","Type":"ContainerStarted","Data":"dcdbffce791a76be0a35e3a2ae36d42b08343125dcfa1bed18b893b4c72cc923"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.722988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" event={"ID":"fe1b0a77-d59c-410a-bcbd-a17d327958ae","Type":"ContainerStarted","Data":"b744cdeb582755340e458558a9aae7ed5a2aa910bfab9c4c29b5584d34082edc"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.726406 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.727116 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.227097922 +0000 UTC m=+115.891137231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.728436 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9rk94" podStartSLOduration=54.728415148 podStartE2EDuration="54.728415148s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.725440287 +0000 UTC m=+115.389479596" watchObservedRunningTime="2026-02-28 03:37:40.728415148 +0000 UTC m=+115.392454467" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.729904 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" event={"ID":"84768df9-4913-419b-b808-353e55de412b","Type":"ContainerStarted","Data":"ecb15c9d3bcf0c2d151a2a376b6ad90317c475a9955a521f083c0b1e2528cde9"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.729947 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" 
event={"ID":"84768df9-4913-419b-b808-353e55de412b","Type":"ContainerStarted","Data":"6d309f83cc07cd5f9f442ccbcb6eb20e0f2d5af88df0d1fbdbbab4b407bc4b58"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.740855 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39696: no serving certificate available for the kubelet" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.779815 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" event={"ID":"29229b27-aee6-4450-ade6-ed702af8d343","Type":"ContainerStarted","Data":"1c916dedcc90e49914c6bec855fe4a98f60d403bdfaa9bec9c1fc698e8560e58"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.780044 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" event={"ID":"29229b27-aee6-4450-ade6-ed702af8d343","Type":"ContainerStarted","Data":"dd7734e6b3ba41281dfa8232c2915c7f7849df0fca2b8e9bddaac7ab09405edd"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.803287 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" event={"ID":"f32058b9-ce5b-414e-9533-69a136730886","Type":"ContainerStarted","Data":"a1406f826c8cfd12fd6d8dd030d8b69b84c655e18b57f70dba4d1e90d80142f8"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.803328 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" event={"ID":"f32058b9-ce5b-414e-9533-69a136730886","Type":"ContainerStarted","Data":"016f352e95d8b8187c40f15ad5393b02eec97dd027251fa0b254d7c0ca46d755"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.819520 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 28 03:37:40 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:40 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:40 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.819710 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.828518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" event={"ID":"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105","Type":"ContainerStarted","Data":"7336542bec9dda47951e4aab7a6423fee168d7848502832c813b66525815f7fa"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.828986 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" event={"ID":"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105","Type":"ContainerStarted","Data":"231857c2dd6af2df12d525d6ebed02c615288c14adfea7a55667c745b5cbc414"} Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.832687 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" gracePeriod=30 Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.833638 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.833740 4624 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.835237 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.835458 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.835614 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.842875 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.842119 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.846189 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.346175391 +0000 UTC m=+116.010214700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.848821 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.861939 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h4pcb" podStartSLOduration=9.861916628 podStartE2EDuration="9.861916628s" 
podCreationTimestamp="2026-02-28 03:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.830507967 +0000 UTC m=+115.494547276" watchObservedRunningTime="2026-02-28 03:37:40.861916628 +0000 UTC m=+115.525955927" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.886125 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.887165 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.903660 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.911805 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.936414 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.951446 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.961248 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.961642 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:40 crc kubenswrapper[4624]: E0228 03:37:40.962447 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.462427914 +0000 UTC m=+116.126467223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.991042 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" podStartSLOduration=54.991006249 podStartE2EDuration="54.991006249s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:40.984560684 +0000 UTC m=+115.648599993" watchObservedRunningTime="2026-02-28 03:37:40.991006249 +0000 UTC m=+115.655045558" Feb 28 03:37:40 crc kubenswrapper[4624]: I0228 03:37:40.991381 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8-metrics-certs\") pod \"network-metrics-daemon-85p9r\" (UID: \"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8\") " pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.023513 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39710: no serving certificate available for the kubelet" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.051251 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x9vc9" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.053668 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-fwdgt" podStartSLOduration=55.053658948 podStartE2EDuration="55.053658948s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:41.052696892 +0000 UTC m=+115.716736201" watchObservedRunningTime="2026-02-28 03:37:41.053658948 +0000 UTC m=+115.717698257" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.064490 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.064919 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.564902953 +0000 UTC m=+116.228942252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.166873 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.167576 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.667554757 +0000 UTC m=+116.331594066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.220432 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-85p9r" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.239422 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" podStartSLOduration=56.239402855 podStartE2EDuration="56.239402855s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:41.147800012 +0000 UTC m=+115.811839321" watchObservedRunningTime="2026-02-28 03:37:41.239402855 +0000 UTC m=+115.903442154" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.241919 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-h69wx" podStartSLOduration=55.241912904 podStartE2EDuration="55.241912904s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:41.234590115 +0000 UTC m=+115.898629424" watchObservedRunningTime="2026-02-28 03:37:41.241912904 +0000 UTC m=+115.905952203" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.243837 4624 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2j628 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.243932 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" podUID="7bbee3e3-9ea7-4f61-a206-fd7f6058f208" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.261346 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39714: no serving certificate available for the kubelet" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.268991 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.269605 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.769593545 +0000 UTC m=+116.433632854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.323900 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" podStartSLOduration=56.323886356 podStartE2EDuration="56.323886356s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:41.322858549 +0000 UTC m=+115.986897858" watchObservedRunningTime="2026-02-28 03:37:41.323886356 +0000 UTC m=+115.987925665" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.375819 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.377549 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:41.877531862 +0000 UTC m=+116.541571171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.436336 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" podStartSLOduration=55.436318815999996 podStartE2EDuration="55.436318816s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:41.388516069 +0000 UTC m=+116.052555378" watchObservedRunningTime="2026-02-28 03:37:41.436318816 +0000 UTC m=+116.100358125" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.457127 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39728: no serving certificate available for the kubelet" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.479181 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.479698 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:41.979685052 +0000 UTC m=+116.643724361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.580906 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.581378 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.081362449 +0000 UTC m=+116.745401758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.636464 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39744: no serving certificate available for the kubelet" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.682496 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.682805 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.18279363 +0000 UTC m=+116.846832939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.783659 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.784002 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.283986684 +0000 UTC m=+116.948025993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.809166 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:41 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:41 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:41 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.809249 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.830423 4624 patch_prober.go:28] interesting pod/console-operator-58897d9998-4mctl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.830487 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4mctl" podUID="a59a9283-102e-4e3b-addd-bda023aabec2" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.832841 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39760: no serving certificate available for the kubelet" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.837997 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" event={"ID":"c2e6cc2d-b396-4eb2-8775-5836cc6ef10c","Type":"ContainerStarted","Data":"d3110e6593d609dacd9242072c18bcfd2d02db2fef2e3ed26cc1b2970bca186d"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.842203 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.847880 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.848518 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.851995 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" event={"ID":"fbfd04f1-cedf-4a3d-b0a5-0a2130f02105","Type":"ContainerStarted","Data":"c73d0c8c97a645d5b2620399484c8bd2b1846f0016c7cf153ebc78396ccb33d0"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.866530 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" 
event={"ID":"19844673-a712-45d6-8b90-ddd98c2f1e97","Type":"ContainerStarted","Data":"ebbbd355f68aa51e6c91391ab183ca123be1fc28e9d4f25bc70619b4b9cc586e"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.872753 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" event={"ID":"3293163b-b75d-40f1-b004-8d938c413a4b","Type":"ContainerStarted","Data":"eb23edd6930e4fddb9e1a3bb7dcc8618ee9afa2dec0889c22045c5e723949723"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.884794 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.886279 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.386266217 +0000 UTC m=+117.050305526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.887581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" event={"ID":"29229b27-aee6-4450-ade6-ed702af8d343","Type":"ContainerStarted","Data":"9ba65afb9c34338ebf49829a1bece921028c1c6a55d4b67082623a761072a62f"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.898017 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" event={"ID":"f55f77b6-8205-47a4-a420-f1cd7ffc7411","Type":"ContainerStarted","Data":"a70b3dc0c17c1ec6d4070dc014bc02f7ce4fd57a30a53831e7e38f6bf7514ade"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.912281 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7jk4g" event={"ID":"22b6c28f-48dd-4e0f-826e-db6301e5dfcb","Type":"ContainerStarted","Data":"b5c348ede15be7d1ba022d90d9083f1a233bc8dba819d6c58c4612b4e79bdf0f"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.912331 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7jk4g" event={"ID":"22b6c28f-48dd-4e0f-826e-db6301e5dfcb","Type":"ContainerStarted","Data":"5008bf9e84b44fcd7601994b96eae5a6499060ae9db17e16e0c84a50257d2e77"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.912377 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 
03:37:41.927187 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" event={"ID":"d2e3fe0a-7ec1-40d6-9df9-abe0c9a40a8b","Type":"ContainerStarted","Data":"4912b39fce5c3645c8477a415ab2c8f7d4020734fbafe5c728a5e1c584ec614e"} Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.929388 4624 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zmfld container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.929659 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.929717 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.929432 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.938494 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j4fmq" podStartSLOduration=55.938469193 
podStartE2EDuration="55.938469193s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:41.937550179 +0000 UTC m=+116.601589488" watchObservedRunningTime="2026-02-28 03:37:41.938469193 +0000 UTC m=+116.602508502" Feb 28 03:37:41 crc kubenswrapper[4624]: I0228 03:37:41.987701 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:41 crc kubenswrapper[4624]: E0228 03:37:41.990224 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.490204356 +0000 UTC m=+117.154243665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.032974 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rcgs6" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.091112 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.091536 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.591520764 +0000 UTC m=+117.255560073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.107624 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2j628" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.192073 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.192695 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.692660817 +0000 UTC m=+117.356700116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.294882 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.295629 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.795614788 +0000 UTC m=+117.459654097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.397666 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.398224 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:42.898196701 +0000 UTC m=+117.562236010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.416664 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39766: no serving certificate available for the kubelet" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.499437 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.500063 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.000040292 +0000 UTC m=+117.664079601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.600675 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.600730 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.100707152 +0000 UTC m=+117.764746451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.601525 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.601967 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.101950946 +0000 UTC m=+117.765990245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.702471 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzkh5" podStartSLOduration=56.702451092 podStartE2EDuration="56.702451092s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:42.423171347 +0000 UTC m=+117.087210656" watchObservedRunningTime="2026-02-28 03:37:42.702451092 +0000 UTC m=+117.366490401" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.703176 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.703549 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.203526351 +0000 UTC m=+117.867565660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.787701 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fl4dx" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.804771 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.805308 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.30528356 +0000 UTC m=+117.969322869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.815845 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:42 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:42 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:42 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.815914 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.891604 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39156: no serving certificate available for the kubelet" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.906115 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.906435 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.406375612 +0000 UTC m=+118.070414921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.906700 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:42 crc kubenswrapper[4624]: E0228 03:37:42.907059 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.40704521 +0000 UTC m=+118.071084519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.917125 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n28nm" podStartSLOduration=57.917106273 podStartE2EDuration="57.917106273s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:42.704695023 +0000 UTC m=+117.368734332" watchObservedRunningTime="2026-02-28 03:37:42.917106273 +0000 UTC m=+117.581145572" Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.946074 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xk27r"] Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.947448 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerName="controller-manager" containerID="cri-o://954af90411c39e7a6714e8e488b9ede03c6f56d35dab831b3cce882e17fa92f9" gracePeriod=30 Feb 28 03:37:42 crc kubenswrapper[4624]: I0228 03:37:42.972238 4624 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zmfld container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 28 03:37:42 crc 
kubenswrapper[4624]: I0228 03:37:42.972325 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.009975 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.010556 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.510526596 +0000 UTC m=+118.174565895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.032286 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n22ft" podStartSLOduration=57.032260995 podStartE2EDuration="57.032260995s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:42.960572291 +0000 UTC m=+117.624611590" watchObservedRunningTime="2026-02-28 03:37:43.032260995 +0000 UTC m=+117.696300304" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.112671 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.131589 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.631568958 +0000 UTC m=+118.295608267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.164493 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.165487 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.184616 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gb7fg" podStartSLOduration=57.184591446 podStartE2EDuration="57.184591446s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:43.109637564 +0000 UTC m=+117.773676873" watchObservedRunningTime="2026-02-28 03:37:43.184591446 +0000 UTC m=+117.848630765" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.210694 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.211146 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" podUID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" containerName="route-controller-manager" 
containerID="cri-o://b5da23d9fc78ece75b213428a7cb25e40cdd4a8a37197da0ab006f608aa307ff" gracePeriod=30 Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.216167 4624 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tztw9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]log ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]etcd ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/max-in-flight-filter ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 28 03:37:43 crc kubenswrapper[4624]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 28 03:37:43 crc kubenswrapper[4624]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/project.openshift.io-projectcache ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-startinformers ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 28 03:37:43 crc kubenswrapper[4624]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 03:37:43 crc kubenswrapper[4624]: livez check failed Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.216599 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" podUID="fe1b0a77-d59c-410a-bcbd-a17d327958ae" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.216185 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.216261 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.716230965 +0000 UTC m=+118.380270274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.217117 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.217494 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.717483059 +0000 UTC m=+118.381522368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.236184 4624 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xk27r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": read tcp 10.217.0.2:54886->10.217.0.8:8443: read: connection reset by peer" start-of-body= Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.236379 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": read tcp 10.217.0.2:54886->10.217.0.8:8443: read: connection reset by peer" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.254695 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.254734 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.282766 4624 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cm5l9" podStartSLOduration=57.282751478 podStartE2EDuration="57.282751478s" podCreationTimestamp="2026-02-28 03:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:43.27912036 +0000 UTC m=+117.943159669" watchObservedRunningTime="2026-02-28 03:37:43.282751478 +0000 UTC m=+117.946790787" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.284619 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-85p9r"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.319695 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.321432 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.821407727 +0000 UTC m=+118.485447036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.324454 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.416576 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39158: no serving certificate available for the kubelet" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.417618 4624 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z4285 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.417735 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" podUID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.424403 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: 
\"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.424838 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:43.924826422 +0000 UTC m=+118.588865731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.442735 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7jk4g" podStartSLOduration=12.442716547 podStartE2EDuration="12.442716547s" podCreationTimestamp="2026-02-28 03:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:43.440857566 +0000 UTC m=+118.104896875" watchObservedRunningTime="2026-02-28 03:37:43.442716547 +0000 UTC m=+118.106755856" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.528699 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.529039 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.029022307 +0000 UTC m=+118.693061616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.568105 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=32.568068236 podStartE2EDuration="32.568068236s" podCreationTimestamp="2026-02-28 03:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:43.56082512 +0000 UTC m=+118.224864429" watchObservedRunningTime="2026-02-28 03:37:43.568068236 +0000 UTC m=+118.232107545" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.603016 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jjcv7"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.604680 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.617204 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.629065 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjcv7"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.634232 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.634706 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.134693843 +0000 UTC m=+118.798733152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.735294 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.735693 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.235595929 +0000 UTC m=+118.899635228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.735971 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-catalog-content\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.736064 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rw2v\" (UniqueName: \"kubernetes.io/projected/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-kube-api-access-5rw2v\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.736169 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.736272 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-utilities\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.736677 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.236669029 +0000 UTC m=+118.900708338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.819205 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:43 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:43 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:43 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.819259 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.823624 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-2n698"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.824704 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.840747 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.841036 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-utilities\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.841195 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-catalog-content\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.841227 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rw2v\" (UniqueName: \"kubernetes.io/projected/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-kube-api-access-5rw2v\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.841893 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-utilities\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.841985 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-catalog-content\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.842209 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.34218821 +0000 UTC m=+119.006227519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.863408 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.888130 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.900381 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.915182 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n698"] Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.928966 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.929039 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.945114 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2jm\" (UniqueName: \"kubernetes.io/projected/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-kube-api-access-bj2jm\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.945149 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-catalog-content\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.945230 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.945276 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-utilities\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:43 crc kubenswrapper[4624]: E0228 03:37:43.945681 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.445669466 +0000 UTC m=+119.109708775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.946872 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rw2v\" (UniqueName: \"kubernetes.io/projected/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-kube-api-access-5rw2v\") pod \"community-operators-jjcv7\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.977424 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.978800 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qw86g"] Feb 28 03:37:43 crc kubenswrapper[4624]: I0228 03:37:43.980419 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.012337 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-85p9r" event={"ID":"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8","Type":"ContainerStarted","Data":"7890cfd7f1d0b0f1870e8aaf7facc5eeafd6467fe7797f887488beb6c3a3119a"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.030368 4624 generic.go:334] "Generic (PLEG): container finished" podID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" containerID="b5da23d9fc78ece75b213428a7cb25e40cdd4a8a37197da0ab006f608aa307ff" exitCode=0 Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.030497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" event={"ID":"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba","Type":"ContainerDied","Data":"b5da23d9fc78ece75b213428a7cb25e40cdd4a8a37197da0ab006f608aa307ff"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.048346 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f671c2fcca0296a2238f233fdcd404d4c706916fcf919331cffa6d981d5e6a24"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049355 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049663 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-catalog-content\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049726 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-utilities\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049777 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj2jm\" (UniqueName: \"kubernetes.io/projected/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-kube-api-access-bj2jm\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049811 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-catalog-content\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049835 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx54v\" (UniqueName: \"kubernetes.io/projected/1f8aeb46-02be-4b30-abdb-7c378da509ba-kube-api-access-rx54v\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.049894 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-utilities\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.050367 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-utilities\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.050465 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.550445038 +0000 UTC m=+119.214484337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.053929 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qw86g"] Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.056655 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-catalog-content\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.081895 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.081940 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.090272 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" event={"ID":"3293163b-b75d-40f1-b004-8d938c413a4b","Type":"ContainerStarted","Data":"7b41fa4f8cbeaee14872d73ed70fd8f0206bff40786d0927cdf8ba7a8266b6bd"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.091883 4624 patch_prober.go:28] interesting pod/console-f9d7485db-ssl5n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection 
refused" start-of-body= Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.091991 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ssl5n" podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.137638 4624 generic.go:334] "Generic (PLEG): container finished" podID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerID="954af90411c39e7a6714e8e488b9ede03c6f56d35dab831b3cce882e17fa92f9" exitCode=0 Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.139550 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" event={"ID":"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c","Type":"ContainerDied","Data":"954af90411c39e7a6714e8e488b9ede03c6f56d35dab831b3cce882e17fa92f9"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.151096 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.151586 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-catalog-content\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.151643 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-utilities\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.151737 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx54v\" (UniqueName: \"kubernetes.io/projected/1f8aeb46-02be-4b30-abdb-7c378da509ba-kube-api-access-rx54v\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.157577 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.657558792 +0000 UTC m=+119.321598101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.159630 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-catalog-content\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.165493 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0ccc16bbe620f072cb582de73b0f184f681e7ae33addd0221b00cc3e61631b52"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.166160 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-utilities\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.249957 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1d0d732310afa2f07301bf6511b5c744f15082c42ea16a346a45e8673ae1303a"} Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.254546 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.255351 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.755328824 +0000 UTC m=+119.419368133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.256180 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.256671 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.7566586 +0000 UTC m=+119.420697909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.267693 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4mctl" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.277729 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w4zpx" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.309777 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj2jm\" (UniqueName: \"kubernetes.io/projected/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-kube-api-access-bj2jm\") pod \"certified-operators-2n698\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.347567 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h895m"] Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.349127 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.356463 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h895m"] Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.357091 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.357290 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.857257448 +0000 UTC m=+119.521296757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.357625 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.372727 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.872704167 +0000 UTC m=+119.536743476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.403592 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx54v\" (UniqueName: \"kubernetes.io/projected/1f8aeb46-02be-4b30-abdb-7c378da509ba-kube-api-access-rx54v\") pod \"community-operators-qw86g\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.414283 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.414352 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.414365 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.414423 4624 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.460911 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.461179 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znckv\" (UniqueName: \"kubernetes.io/projected/9a269916-9894-4dcf-99db-7df5a1791898-kube-api-access-znckv\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.461222 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-catalog-content\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.461276 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-utilities\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.461402 4624 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:44.961379812 +0000 UTC m=+119.625419121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.492501 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.562326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.562440 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znckv\" (UniqueName: \"kubernetes.io/projected/9a269916-9894-4dcf-99db-7df5a1791898-kube-api-access-znckv\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.562474 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-catalog-content\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.562804 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.062780492 +0000 UTC m=+119.726819801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.562922 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-utilities\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.563306 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-catalog-content\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.563409 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-utilities\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.676689 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.677662 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.177647047 +0000 UTC m=+119.841686356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.681863 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.694616 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.779515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.780019 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.280002273 +0000 UTC m=+119.944041582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.809374 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.828170 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:44 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:44 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:44 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.828241 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.886838 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.887948 4624 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.387932809 +0000 UTC m=+120.051972118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.904714 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.990815 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt78s\" (UniqueName: \"kubernetes.io/projected/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-kube-api-access-kt78s\") pod \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.991270 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-config\") pod \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.991344 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-serving-cert\") pod \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\" (UID: 
\"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.991375 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-client-ca\") pod \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\" (UID: \"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba\") " Feb 28 03:37:44 crc kubenswrapper[4624]: I0228 03:37:44.991561 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:44 crc kubenswrapper[4624]: E0228 03:37:44.991861 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.491849987 +0000 UTC m=+120.155889296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.000304 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-kube-api-access-kt78s" (OuterVolumeSpecName: "kube-api-access-kt78s") pod "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" (UID: "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba"). InnerVolumeSpecName "kube-api-access-kt78s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.000904 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" (UID: "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.000949 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-config" (OuterVolumeSpecName: "config") pod "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" (UID: "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.001338 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" (UID: "282dd2f7-31bf-4b54-96c4-5c3dc1e834ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.071065 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znckv\" (UniqueName: \"kubernetes.io/projected/9a269916-9894-4dcf-99db-7df5a1791898-kube-api-access-znckv\") pod \"certified-operators-h895m\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.096806 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.097048 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt78s\" (UniqueName: \"kubernetes.io/projected/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-kube-api-access-kt78s\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.097060 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.097069 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.097093 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.097165 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.597147193 +0000 UTC m=+120.261186502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.198062 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.198442 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:45.69842818 +0000 UTC m=+120.362467489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.270733 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" event={"ID":"282dd2f7-31bf-4b54-96c4-5c3dc1e834ba","Type":"ContainerDied","Data":"bdbbeff4665c939188e772e896c1c252f17e9f650973ac5629fc6c9f3eca5592"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.271209 4624 scope.go:117] "RemoveContainer" containerID="b5da23d9fc78ece75b213428a7cb25e40cdd4a8a37197da0ab006f608aa307ff" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.271378 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.298576 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-85p9r" event={"ID":"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8","Type":"ContainerStarted","Data":"8bd389a9e451b2817987b9a3368ae56d1c53c7dc3ed514642826f6ba481b29eb"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.298638 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-85p9r" event={"ID":"6d3ef819-9ba1-4cf0-a656-2bb6f50ec6b8","Type":"ContainerStarted","Data":"b5a11a23b41fbebcf7b83371fc9ca46c41aff1598e4aa95f484231f719d02204"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.300767 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.301433 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.801410572 +0000 UTC m=+120.465449881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.318833 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1fbeddb756c3465aea180061e6d103d78966e726bf1f837f442d9d3cf1d2f982"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.337746 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.365583 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" event={"ID":"3293163b-b75d-40f1-b004-8d938c413a4b","Type":"ContainerStarted","Data":"49211657cd1905f91b6b64772f4047f900fef763157f8c09854c334372066324"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.386985 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"26e79cdf349bdc8cef1dbe3d2d8f5d59a3019b891cb78c1abe1657c73284ec19"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.407872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: 
\"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.408298 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:45.90828279 +0000 UTC m=+120.572322099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.417435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"acf2d333b2a8dbc3a779473392b80d297267caf64cd001f1c3ddb849065b5254"} Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.418303 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.476382 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.509593 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.510009 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-client-ca\") pod \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.510055 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-config\") pod \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.510138 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-serving-cert\") pod \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.510162 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-proxy-ca-bundles\") pod \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.510189 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-58vz8\" (UniqueName: \"kubernetes.io/projected/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-kube-api-access-58vz8\") pod \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\" (UID: \"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c\") " Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.511225 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.011210022 +0000 UTC m=+120.675249331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.511679 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" (UID: "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.512141 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-config" (OuterVolumeSpecName: "config") pod "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" (UID: "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.517977 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" (UID: "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.546486 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" (UID: "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.554221 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-kube-api-access-58vz8" (OuterVolumeSpecName: "kube-api-access-58vz8") pod "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" (UID: "c28db6c5-346b-4b5a-be0d-0a0165ae4c8c"). InnerVolumeSpecName "kube-api-access-58vz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.569465 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjcv7"] Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.586034 4624 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.613654 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.614049 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.614068 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.614098 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58vz8\" (UniqueName: \"kubernetes.io/projected/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-kube-api-access-58vz8\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.614109 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 
28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.614119 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.614252 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.114195925 +0000 UTC m=+120.778235234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.717780 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.718167 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.218149234 +0000 UTC m=+120.882188543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.821767 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:45 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:45 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:45 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.821823 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.828112 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.828430 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 03:37:46.328418905 +0000 UTC m=+120.992458214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.929417 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.929604 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.429569588 +0000 UTC m=+121.093608897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.930008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.930393 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.430377929 +0000 UTC m=+121.094417228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.957880 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2pjq"] Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.958138 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerName="controller-manager" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.958153 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerName="controller-manager" Feb 28 03:37:45 crc kubenswrapper[4624]: E0228 03:37:45.958168 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" containerName="route-controller-manager" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.958174 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" containerName="route-controller-manager" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.958286 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" containerName="controller-manager" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.958300 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" containerName="route-controller-manager" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.959155 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.967156 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 03:37:45 crc kubenswrapper[4624]: I0228 03:37:45.992681 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-85p9r" podStartSLOduration=60.992656368 podStartE2EDuration="1m0.992656368s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:45.981850875 +0000 UTC m=+120.645890174" watchObservedRunningTime="2026-02-28 03:37:45.992656368 +0000 UTC m=+120.656695677" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.014056 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2pjq"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.033043 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.036346 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-utilities\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.036419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-catalog-content\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.036774 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwb82\" (UniqueName: \"kubernetes.io/projected/51920ae4-b602-4113-b233-57fdef96cd52-kube-api-access-pwb82\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: E0228 03:37:46.037063 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.537035572 +0000 UTC m=+121.201074881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.076739 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.130379 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4285"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.140105 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwb82\" (UniqueName: \"kubernetes.io/projected/51920ae4-b602-4113-b233-57fdef96cd52-kube-api-access-pwb82\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.140419 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.140520 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-utilities\") pod \"redhat-marketplace-v2pjq\" (UID: 
\"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.140602 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-catalog-content\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: E0228 03:37:46.140959 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.64094233 +0000 UTC m=+121.304981639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzbdn" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.141467 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-utilities\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.141797 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-catalog-content\") pod \"redhat-marketplace-v2pjq\" (UID: 
\"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.155482 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39162: no serving certificate available for the kubelet" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.231372 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwb82\" (UniqueName: \"kubernetes.io/projected/51920ae4-b602-4113-b233-57fdef96cd52-kube-api-access-pwb82\") pod \"redhat-marketplace-v2pjq\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.244202 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:46 crc kubenswrapper[4624]: E0228 03:37:46.244798 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 03:37:46.744777256 +0000 UTC m=+121.408816565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.265092 4624 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-28T03:37:45.586093322Z","Handler":null,"Name":""} Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.265516 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.266704 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.281223 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.281505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.281712 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.281841 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.282213 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.282328 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.324174 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qw86g"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.344250 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.352582 4624 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.352630 4624 csi_plugin.go:113] kubernetes.io/csi: Register 
new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.353358 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r5lxr"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.359363 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/973e4692-3689-4011-94c4-06df1913c988-serving-cert\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.359403 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-client-ca\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.359436 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-config\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.359501 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: 
\"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.359537 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbmbd\" (UniqueName: \"kubernetes.io/projected/973e4692-3689-4011-94c4-06df1913c988-kube-api-access-kbmbd\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.363688 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.393306 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.394664 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5lxr"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.394842 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.425622 4624 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.425701 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.474804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" event={"ID":"3293163b-b75d-40f1-b004-8d938c413a4b","Type":"ContainerStarted","Data":"176ea4bb2622ec0f2bb24886257158d58377a2e1be5545275612078b2e5d5bc0"} Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.476865 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-config\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.476917 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxdpz\" (UniqueName: \"kubernetes.io/projected/39526829-389b-49e1-8a31-5fee6a4ffa8f-kube-api-access-pxdpz\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.476995 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbmbd\" (UniqueName: 
\"kubernetes.io/projected/973e4692-3689-4011-94c4-06df1913c988-kube-api-access-kbmbd\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.477017 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-catalog-content\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.477040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-utilities\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.477057 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/973e4692-3689-4011-94c4-06df1913c988-serving-cert\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.477075 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-client-ca\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 
03:37:46.477843 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-client-ca\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.487632 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-config\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.504571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/973e4692-3689-4011-94c4-06df1913c988-serving-cert\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.519706 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" event={"ID":"c28db6c5-346b-4b5a-be0d-0a0165ae4c8c","Type":"ContainerDied","Data":"af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f"} Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.519769 4624 scope.go:117] "RemoveContainer" containerID="954af90411c39e7a6714e8e488b9ede03c6f56d35dab831b3cce882e17fa92f9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.519894 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xk27r" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.549730 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerStarted","Data":"e69e1bddf6304b71c3f10939217df7c56a3840365ae97d915be28eaa2369b970"} Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.566179 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n698"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.578573 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-catalog-content\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.578616 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-utilities\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.578712 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxdpz\" (UniqueName: \"kubernetes.io/projected/39526829-389b-49e1-8a31-5fee6a4ffa8f-kube-api-access-pxdpz\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.579197 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-catalog-content\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.579520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-utilities\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.586418 4624 generic.go:334] "Generic (PLEG): container finished" podID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerID="7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76" exitCode=0 Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.587424 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbmbd\" (UniqueName: \"kubernetes.io/projected/973e4692-3689-4011-94c4-06df1913c988-kube-api-access-kbmbd\") pod \"route-controller-manager-5b479ddd7f-nppn9\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.590617 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerDied","Data":"7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76"} Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.590703 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerStarted","Data":"f6b2431c7cf2b1341dc4c749320ab100876d40a2bede0a11d8ab4fa1333821e1"} Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 
03:37:46.616733 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxdpz\" (UniqueName: \"kubernetes.io/projected/39526829-389b-49e1-8a31-5fee6a4ffa8f-kube-api-access-pxdpz\") pod \"redhat-marketplace-r5lxr\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.621338 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.680327 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.735789 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h895m"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.742306 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.746190 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x54xc"] Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.747307 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.751758 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.783212 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-utilities\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.783757 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-catalog-content\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.783805 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msnmv\" (UniqueName: \"kubernetes.io/projected/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-kube-api-access-msnmv\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.789485 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" podStartSLOduration=15.789462867 podStartE2EDuration="15.789462867s" podCreationTimestamp="2026-02-28 03:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:46.788120871 +0000 UTC m=+121.452160180" 
watchObservedRunningTime="2026-02-28 03:37:46.789462867 +0000 UTC m=+121.453502176" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.803601 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x54xc"] Feb 28 03:37:46 crc kubenswrapper[4624]: W0228 03:37:46.812861 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a269916_9894_4dcf_99db_7df5a1791898.slice/crio-df6f3a08522e29fa0425b661b836b79ee8486faabc9ee7a67c1c49baea19a9b5 WatchSource:0}: Error finding container df6f3a08522e29fa0425b661b836b79ee8486faabc9ee7a67c1c49baea19a9b5: Status 404 returned error can't find the container with id df6f3a08522e29fa0425b661b836b79ee8486faabc9ee7a67c1c49baea19a9b5 Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.844504 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:46 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:46 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:46 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.844616 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.897847 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-utilities\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " 
pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.897932 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-catalog-content\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.897985 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msnmv\" (UniqueName: \"kubernetes.io/projected/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-kube-api-access-msnmv\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.898684 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-utilities\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:46 crc kubenswrapper[4624]: I0228 03:37:46.986026 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-catalog-content\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.016872 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xk27r"] Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.032818 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msnmv\" (UniqueName: 
\"kubernetes.io/projected/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-kube-api-access-msnmv\") pod \"redhat-operators-x54xc\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.048018 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xk27r"] Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.052303 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzbdn\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.075582 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kgmt8"] Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.076857 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.084060 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.109149 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.122495 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kgmt8"] Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.203980 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.212582 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-catalog-content\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.212743 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-utilities\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.212864 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48dcb\" (UniqueName: \"kubernetes.io/projected/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-kube-api-access-48dcb\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.233577 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.240559 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.259990 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39172: no serving certificate available for the kubelet" Feb 28 03:37:47 crc kubenswrapper[4624]: E0228 03:37:47.290430 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f17b2_3180_41e3_a8cf_1f40338eadf0.slice/crio-7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76.scope\": RecentStats: unable to find data in memory cache]" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.313958 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48dcb\" (UniqueName: \"kubernetes.io/projected/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-kube-api-access-48dcb\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.314566 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-catalog-content\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.314603 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-utilities\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.315309 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-utilities\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.326687 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-catalog-content\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.353991 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48dcb\" (UniqueName: \"kubernetes.io/projected/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-kube-api-access-48dcb\") pod \"redhat-operators-kgmt8\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.435552 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2pjq"] Feb 28 03:37:47 crc kubenswrapper[4624]: W0228 03:37:47.454183 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51920ae4_b602_4113_b233_57fdef96cd52.slice/crio-7625a9f9d3ea6f21786b3fa7838aa3ff99737d63733c90be636f660c7f60ad34 WatchSource:0}: Error finding container 7625a9f9d3ea6f21786b3fa7838aa3ff99737d63733c90be636f660c7f60ad34: Status 
404 returned error can't find the container with id 7625a9f9d3ea6f21786b3fa7838aa3ff99737d63733c90be636f660c7f60ad34 Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.523560 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.569244 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.569927 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.575514 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.575654 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.640929 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.646248 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerStarted","Data":"7625a9f9d3ea6f21786b3fa7838aa3ff99737d63733c90be636f660c7f60ad34"} Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.714279 4624 generic.go:334] "Generic (PLEG): container finished" podID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerID="099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41" exitCode=0 Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.714382 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" 
event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerDied","Data":"099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41"} Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.723885 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.723978 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.734446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerStarted","Data":"f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881"} Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.734525 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerStarted","Data":"df6f3a08522e29fa0425b661b836b79ee8486faabc9ee7a67c1c49baea19a9b5"} Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.746388 4624 generic.go:334] "Generic (PLEG): container finished" podID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerID="7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e" exitCode=0 Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.747316 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2n698" event={"ID":"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f","Type":"ContainerDied","Data":"7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e"} Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.747351 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n698" event={"ID":"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f","Type":"ContainerStarted","Data":"42783a68ac1de4ef621d239e5c6a2cb485afe4af4e981879e51fb240f327eea3"} Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.829320 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:47 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:47 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:47 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.829370 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.830420 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.830465 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kubelet-dir\") 
pod \"revision-pruner-8-crc\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.830528 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.888484 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:47 crc kubenswrapper[4624]: I0228 03:37:47.896944 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.121288 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282dd2f7-31bf-4b54-96c4-5c3dc1e834ba" path="/var/lib/kubelet/pods/282dd2f7-31bf-4b54-96c4-5c3dc1e834ba/volumes" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.122278 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.129154 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28db6c5-346b-4b5a-be0d-0a0165ae4c8c" path="/var/lib/kubelet/pods/c28db6c5-346b-4b5a-be0d-0a0165ae4c8c/volumes" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.174929 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.189868 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tztw9" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.301123 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76c4c5df84-fdbzb"] Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.302506 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.303279 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5lxr"] Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.312833 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.313060 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.313744 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.314042 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.314255 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.317573 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.364760 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.427820 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c4c5df84-fdbzb"] Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.534853 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-client-ca\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.535234 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-proxy-ca-bundles\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.535351 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-config\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.535423 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e3b3a9-1673-4360-8fb3-69b270e42534-serving-cert\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.535499 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8p5k\" (UniqueName: \"kubernetes.io/projected/49e3b3a9-1673-4360-8fb3-69b270e42534-kube-api-access-s8p5k\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" 
Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.638200 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-config\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.638250 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e3b3a9-1673-4360-8fb3-69b270e42534-serving-cert\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.638281 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8p5k\" (UniqueName: \"kubernetes.io/projected/49e3b3a9-1673-4360-8fb3-69b270e42534-kube-api-access-s8p5k\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.638322 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-client-ca\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.638342 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-proxy-ca-bundles\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: 
\"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.645669 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-proxy-ca-bundles\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.648916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-client-ca\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.649231 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-config\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.670142 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e3b3a9-1673-4360-8fb3-69b270e42534-serving-cert\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.701302 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8p5k\" (UniqueName: 
\"kubernetes.io/projected/49e3b3a9-1673-4360-8fb3-69b270e42534-kube-api-access-s8p5k\") pod \"controller-manager-76c4c5df84-fdbzb\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.733234 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzbdn"] Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.734271 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.735647 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.750452 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.750689 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.768449 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.815331 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:48 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:48 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:48 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.816053 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.817856 4624 generic.go:334] "Generic (PLEG): container finished" podID="9a269916-9894-4dcf-99db-7df5a1791898" containerID="f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881" exitCode=0 Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.818022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerDied","Data":"f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881"} Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.828512 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5lxr" event={"ID":"39526829-389b-49e1-8a31-5fee6a4ffa8f","Type":"ContainerStarted","Data":"e9164fdeb16f2fd25518c698d935bdb57e7a251230ff4c7c94c47c2bb2f9f438"} Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.840459 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4730663-21f3-418a-8e5b-2810333a0686-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.840570 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4730663-21f3-418a-8e5b-2810333a0686-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.867700 4624 generic.go:334] "Generic (PLEG): container finished" podID="51920ae4-b602-4113-b233-57fdef96cd52" containerID="5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9" exitCode=0 Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.861711 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x54xc"] Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.872198 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerDied","Data":"5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9"} Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.873483 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" event={"ID":"5823705e-af27-4b37-98f8-f73d31f69e02","Type":"ContainerStarted","Data":"1042499339faec1765938add0c64281dfed899be18044873b1da8f6f0293d1ad"} Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.945230 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f4730663-21f3-418a-8e5b-2810333a0686-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.945697 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4730663-21f3-418a-8e5b-2810333a0686-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.946935 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4730663-21f3-418a-8e5b-2810333a0686-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:48 crc kubenswrapper[4624]: I0228 03:37:48.991462 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4730663-21f3-418a-8e5b-2810333a0686-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.005905 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.032619 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kgmt8"] Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.103911 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:49 crc kubenswrapper[4624]: W0228 03:37:49.135247 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02145e1a_bf6e_41a9_ac4c_a8fa7b186414.slice/crio-2d85e4d49297582c6b5c25826fa51a56a6066ea1691f27e4d9e2b4c4bbce5d92 WatchSource:0}: Error finding container 2d85e4d49297582c6b5c25826fa51a56a6066ea1691f27e4d9e2b4c4bbce5d92: Status 404 returned error can't find the container with id 2d85e4d49297582c6b5c25826fa51a56a6066ea1691f27e4d9e2b4c4bbce5d92 Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.253974 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9"] Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.370999 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.625620 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c4c5df84-fdbzb"] Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.902990 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:49 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:49 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:49 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.903440 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.960831 4624 generic.go:334] "Generic (PLEG): container finished" podID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerID="2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a" exitCode=0 Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.961011 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerDied","Data":"2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a"} Feb 28 03:37:49 crc kubenswrapper[4624]: I0228 03:37:49.961397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerStarted","Data":"b6019718072c07f346bf0c94b753e8ec9f7357599ff29647810abef73960a5b8"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.029772 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" event={"ID":"5823705e-af27-4b37-98f8-f73d31f69e02","Type":"ContainerStarted","Data":"2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.039489 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.052844 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.099597 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" podStartSLOduration=65.099563583 podStartE2EDuration="1m5.099563583s" podCreationTimestamp="2026-02-28 03:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:50.091218097 +0000 UTC m=+124.755257406" watchObservedRunningTime="2026-02-28 03:37:50.099563583 +0000 UTC m=+124.763602892" Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.181312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" event={"ID":"49e3b3a9-1673-4360-8fb3-69b270e42534","Type":"ContainerStarted","Data":"afa56433bf64881e2b6d3c6fdb336e9662dbbfb7cfa26ce75017b1282cb875eb"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.182027 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerStarted","Data":"2d85e4d49297582c6b5c25826fa51a56a6066ea1691f27e4d9e2b4c4bbce5d92"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.186387 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a634e6e-c46d-4576-9c18-3e35d3fc3f42","Type":"ContainerStarted","Data":"604be69118c31538dfd5c1aca44e7e303c0cdc3b2e95af9343ebcb615058db2b"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.188920 4624 generic.go:334] "Generic (PLEG): container finished" podID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerID="12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b" exitCode=0 Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.189253 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5lxr" event={"ID":"39526829-389b-49e1-8a31-5fee6a4ffa8f","Type":"ContainerDied","Data":"12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.209634 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" 
event={"ID":"973e4692-3689-4011-94c4-06df1913c988","Type":"ContainerStarted","Data":"3828be0427beeff912c14f922a69a03f1b3e6df644b3d5e1ec85d62b1a73ba71"} Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.316204 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.347530 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.817179 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:50 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:50 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:50 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:50 crc kubenswrapper[4624]: I0228 03:37:50.817546 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.346876 4624 ???:1] "http: TLS handshake error from 192.168.126.11:39174: no serving certificate available for the kubelet" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.349429 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4730663-21f3-418a-8e5b-2810333a0686","Type":"ContainerStarted","Data":"4717b0c2198a7ff7862e17346095c77624874686576b90676533ca0b6de4ac73"} Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.398894 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="11d82af0-7eea-4c15-af5d-e58d1a0b6721" containerID="e144a3f56cecfc8f771aa54c3eb17b7492d2659f7761160abe9f8827cf138078" exitCode=0 Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.399011 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" event={"ID":"11d82af0-7eea-4c15-af5d-e58d1a0b6721","Type":"ContainerDied","Data":"e144a3f56cecfc8f771aa54c3eb17b7492d2659f7761160abe9f8827cf138078"} Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.437637 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" event={"ID":"973e4692-3689-4011-94c4-06df1913c988","Type":"ContainerStarted","Data":"981945461f1cd64a9e930f1b8c2a2dd83f85829d8734e702286703c6569e7427"} Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.438990 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.449031 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.450894 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" event={"ID":"49e3b3a9-1673-4360-8fb3-69b270e42534","Type":"ContainerStarted","Data":"b661fd3108c3c72893dda4403b22d8522e1b8ef019133809e9107b457e7d5731"} Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.452094 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.453780 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
podStartSLOduration=4.453762267 podStartE2EDuration="4.453762267s" podCreationTimestamp="2026-02-28 03:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:51.44944976 +0000 UTC m=+126.113489059" watchObservedRunningTime="2026-02-28 03:37:51.453762267 +0000 UTC m=+126.117801576" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.471390 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.480543 4624 generic.go:334] "Generic (PLEG): container finished" podID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerID="b602e5bf4ee47eeb456fe3ad3af7973eb8735d9188388235ecc79a2194e68a91" exitCode=0 Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.482284 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerDied","Data":"b602e5bf4ee47eeb456fe3ad3af7973eb8735d9188388235ecc79a2194e68a91"} Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.531671 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" podStartSLOduration=7.53163927 podStartE2EDuration="7.53163927s" podCreationTimestamp="2026-02-28 03:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:51.530783106 +0000 UTC m=+126.194822415" watchObservedRunningTime="2026-02-28 03:37:51.53163927 +0000 UTC m=+126.195678579" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.532112 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" 
podStartSLOduration=7.532108472 podStartE2EDuration="7.532108472s" podCreationTimestamp="2026-02-28 03:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:37:51.482484937 +0000 UTC m=+126.146524236" watchObservedRunningTime="2026-02-28 03:37:51.532108472 +0000 UTC m=+126.196147781" Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.811550 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:51 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:51 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:51 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:51 crc kubenswrapper[4624]: I0228 03:37:51.811640 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.550883 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4730663-21f3-418a-8e5b-2810333a0686" containerID="32870af0e302eaf7eb7c7ac733883d1efb9288e4d584c834a0e6889639f9b3ff" exitCode=0 Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.550971 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4730663-21f3-418a-8e5b-2810333a0686","Type":"ContainerDied","Data":"32870af0e302eaf7eb7c7ac733883d1efb9288e4d584c834a0e6889639f9b3ff"} Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.566068 4624 generic.go:334] "Generic (PLEG): container finished" podID="2a634e6e-c46d-4576-9c18-3e35d3fc3f42" 
containerID="a2b7869d128c14f93cda31dbd7a0e72813b360b22d0062404168f3ee064b71cb" exitCode=0 Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.566133 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a634e6e-c46d-4576-9c18-3e35d3fc3f42","Type":"ContainerDied","Data":"a2b7869d128c14f93cda31dbd7a0e72813b360b22d0062404168f3ee064b71cb"} Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.668810 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7jk4g" Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.815616 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:52 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:52 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:52 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:52 crc kubenswrapper[4624]: I0228 03:37:52.816072 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.095458 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.206256 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d82af0-7eea-4c15-af5d-e58d1a0b6721-secret-volume\") pod \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.206337 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d82af0-7eea-4c15-af5d-e58d1a0b6721-config-volume\") pod \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.206390 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj5t4\" (UniqueName: \"kubernetes.io/projected/11d82af0-7eea-4c15-af5d-e58d1a0b6721-kube-api-access-wj5t4\") pod \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\" (UID: \"11d82af0-7eea-4c15-af5d-e58d1a0b6721\") " Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.208194 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d82af0-7eea-4c15-af5d-e58d1a0b6721-config-volume" (OuterVolumeSpecName: "config-volume") pod "11d82af0-7eea-4c15-af5d-e58d1a0b6721" (UID: "11d82af0-7eea-4c15-af5d-e58d1a0b6721"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.218305 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d82af0-7eea-4c15-af5d-e58d1a0b6721-kube-api-access-wj5t4" (OuterVolumeSpecName: "kube-api-access-wj5t4") pod "11d82af0-7eea-4c15-af5d-e58d1a0b6721" (UID: "11d82af0-7eea-4c15-af5d-e58d1a0b6721"). 
InnerVolumeSpecName "kube-api-access-wj5t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.218385 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d82af0-7eea-4c15-af5d-e58d1a0b6721-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11d82af0-7eea-4c15-af5d-e58d1a0b6721" (UID: "11d82af0-7eea-4c15-af5d-e58d1a0b6721"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.308281 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d82af0-7eea-4c15-af5d-e58d1a0b6721-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.308328 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d82af0-7eea-4c15-af5d-e58d1a0b6721-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.308340 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj5t4\" (UniqueName: \"kubernetes.io/projected/11d82af0-7eea-4c15-af5d-e58d1a0b6721-kube-api-access-wj5t4\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.612618 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.612836 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd" event={"ID":"11d82af0-7eea-4c15-af5d-e58d1a0b6721","Type":"ContainerDied","Data":"6d412b7b62a3c29bbe1aef7e50ac7f83b967ba153d81e65a6b048bbc4d394ee3"} Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.614244 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d412b7b62a3c29bbe1aef7e50ac7f83b967ba153d81e65a6b048bbc4d394ee3" Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.808097 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:53 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:53 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:53 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:53 crc kubenswrapper[4624]: I0228 03:37:53.808874 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:53 crc kubenswrapper[4624]: E0228 03:37:53.887459 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:37:53 crc kubenswrapper[4624]: E0228 03:37:53.922879 4624 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:37:53 crc kubenswrapper[4624]: E0228 03:37:53.932136 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:37:53 crc kubenswrapper[4624]: E0228 03:37:53.932480 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.082711 4624 patch_prober.go:28] interesting pod/console-f9d7485db-ssl5n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.082826 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ssl5n" podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.286784 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.349770 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4730663-21f3-418a-8e5b-2810333a0686-kubelet-dir\") pod \"f4730663-21f3-418a-8e5b-2810333a0686\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.349927 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4730663-21f3-418a-8e5b-2810333a0686-kube-api-access\") pod \"f4730663-21f3-418a-8e5b-2810333a0686\" (UID: \"f4730663-21f3-418a-8e5b-2810333a0686\") " Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.350858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4730663-21f3-418a-8e5b-2810333a0686-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4730663-21f3-418a-8e5b-2810333a0686" (UID: "f4730663-21f3-418a-8e5b-2810333a0686"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.375314 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4730663-21f3-418a-8e5b-2810333a0686-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4730663-21f3-418a-8e5b-2810333a0686" (UID: "f4730663-21f3-418a-8e5b-2810333a0686"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.406065 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.406152 4624 patch_prober.go:28] interesting pod/downloads-7954f5f757-psbkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.406157 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.406212 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psbkg" podUID="299baa07-011e-4629-808b-f86667b5cd82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.458049 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4730663-21f3-418a-8e5b-2810333a0686-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.458114 4624 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4730663-21f3-418a-8e5b-2810333a0686-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:54 
crc kubenswrapper[4624]: I0228 03:37:54.530532 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.652128 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a634e6e-c46d-4576-9c18-3e35d3fc3f42","Type":"ContainerDied","Data":"604be69118c31538dfd5c1aca44e7e303c0cdc3b2e95af9343ebcb615058db2b"} Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.652187 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604be69118c31538dfd5c1aca44e7e303c0cdc3b2e95af9343ebcb615058db2b" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.652239 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.663379 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kube-api-access\") pod \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.663435 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kubelet-dir\") pod \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\" (UID: \"2a634e6e-c46d-4576-9c18-3e35d3fc3f42\") " Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.664215 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a634e6e-c46d-4576-9c18-3e35d3fc3f42" (UID: "2a634e6e-c46d-4576-9c18-3e35d3fc3f42"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.668680 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a634e6e-c46d-4576-9c18-3e35d3fc3f42" (UID: "2a634e6e-c46d-4576-9c18-3e35d3fc3f42"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.719151 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4730663-21f3-418a-8e5b-2810333a0686","Type":"ContainerDied","Data":"4717b0c2198a7ff7862e17346095c77624874686576b90676533ca0b6de4ac73"} Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.719212 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4717b0c2198a7ff7862e17346095c77624874686576b90676533ca0b6de4ac73" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.719295 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.765638 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.765686 4624 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a634e6e-c46d-4576-9c18-3e35d3fc3f42-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.808398 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:54 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:54 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:54 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:54 crc kubenswrapper[4624]: I0228 03:37:54.808491 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:55 crc kubenswrapper[4624]: I0228 03:37:55.808596 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:55 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:55 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:55 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:55 crc 
kubenswrapper[4624]: I0228 03:37:55.809017 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:56 crc kubenswrapper[4624]: I0228 03:37:56.137760 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 28 03:37:56 crc kubenswrapper[4624]: I0228 03:37:56.806956 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:56 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:56 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:56 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:56 crc kubenswrapper[4624]: I0228 03:37:56.807049 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:57 crc kubenswrapper[4624]: E0228 03:37:57.454536 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f17b2_3180_41e3_a8cf_1f40338eadf0.slice/crio-7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice\": RecentStats: unable to find data in memory cache]" Feb 28 03:37:57 crc kubenswrapper[4624]: I0228 03:37:57.807123 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:57 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:57 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:57 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:57 crc kubenswrapper[4624]: I0228 03:37:57.808426 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:58 crc kubenswrapper[4624]: I0228 03:37:58.806311 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:58 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:58 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:58 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:58 crc kubenswrapper[4624]: I0228 03:37:58.806370 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:37:59 crc kubenswrapper[4624]: I0228 03:37:59.806578 4624 patch_prober.go:28] interesting pod/router-default-5444994796-pcq7q container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 03:37:59 crc kubenswrapper[4624]: [-]has-synced failed: reason withheld Feb 28 03:37:59 crc kubenswrapper[4624]: [+]process-running ok Feb 28 03:37:59 crc kubenswrapper[4624]: healthz check failed Feb 28 03:37:59 crc kubenswrapper[4624]: I0228 03:37:59.806656 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pcq7q" podUID="940da15d-4365-40e8-9f00-33fecfb1e6c6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.155439 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537498-x9wgz"] Feb 28 03:38:00 crc kubenswrapper[4624]: E0228 03:38:00.155814 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d82af0-7eea-4c15-af5d-e58d1a0b6721" containerName="collect-profiles" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.155836 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d82af0-7eea-4c15-af5d-e58d1a0b6721" containerName="collect-profiles" Feb 28 03:38:00 crc kubenswrapper[4624]: E0228 03:38:00.155847 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4730663-21f3-418a-8e5b-2810333a0686" containerName="pruner" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.155853 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4730663-21f3-418a-8e5b-2810333a0686" containerName="pruner" Feb 28 03:38:00 crc kubenswrapper[4624]: E0228 03:38:00.155873 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a634e6e-c46d-4576-9c18-3e35d3fc3f42" containerName="pruner" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.155880 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a634e6e-c46d-4576-9c18-3e35d3fc3f42" 
containerName="pruner" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.156004 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4730663-21f3-418a-8e5b-2810333a0686" containerName="pruner" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.156018 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d82af0-7eea-4c15-af5d-e58d1a0b6721" containerName="collect-profiles" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.156032 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a634e6e-c46d-4576-9c18-3e35d3fc3f42" containerName="pruner" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.156614 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.160252 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.161318 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537498-x9wgz"] Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.161428 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.165177 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.211343 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.211317373 podStartE2EDuration="4.211317373s" podCreationTimestamp="2026-02-28 03:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:00.20755602 +0000 UTC 
m=+134.871595329" watchObservedRunningTime="2026-02-28 03:38:00.211317373 +0000 UTC m=+134.875356682" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.297075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpfb\" (UniqueName: \"kubernetes.io/projected/70fd0c80-14b5-4af4-bc5a-7ca64460bc65-kube-api-access-kxpfb\") pod \"auto-csr-approver-29537498-x9wgz\" (UID: \"70fd0c80-14b5-4af4-bc5a-7ca64460bc65\") " pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.304940 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c4c5df84-fdbzb"] Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.305294 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" podUID="49e3b3a9-1673-4360-8fb3-69b270e42534" containerName="controller-manager" containerID="cri-o://b661fd3108c3c72893dda4403b22d8522e1b8ef019133809e9107b457e7d5731" gracePeriod=30 Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.318650 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9"] Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.319351 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" podUID="973e4692-3689-4011-94c4-06df1913c988" containerName="route-controller-manager" containerID="cri-o://981945461f1cd64a9e930f1b8c2a2dd83f85829d8734e702286703c6569e7427" gracePeriod=30 Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.398588 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpfb\" (UniqueName: 
\"kubernetes.io/projected/70fd0c80-14b5-4af4-bc5a-7ca64460bc65-kube-api-access-kxpfb\") pod \"auto-csr-approver-29537498-x9wgz\" (UID: \"70fd0c80-14b5-4af4-bc5a-7ca64460bc65\") " pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.438965 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpfb\" (UniqueName: \"kubernetes.io/projected/70fd0c80-14b5-4af4-bc5a-7ca64460bc65-kube-api-access-kxpfb\") pod \"auto-csr-approver-29537498-x9wgz\" (UID: \"70fd0c80-14b5-4af4-bc5a-7ca64460bc65\") " pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.490717 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.806413 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.808458 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pcq7q" Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.907562 4624 generic.go:334] "Generic (PLEG): container finished" podID="973e4692-3689-4011-94c4-06df1913c988" containerID="981945461f1cd64a9e930f1b8c2a2dd83f85829d8734e702286703c6569e7427" exitCode=0 Feb 28 03:38:00 crc kubenswrapper[4624]: I0228 03:38:00.907666 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" event={"ID":"973e4692-3689-4011-94c4-06df1913c988","Type":"ContainerDied","Data":"981945461f1cd64a9e930f1b8c2a2dd83f85829d8734e702286703c6569e7427"} Feb 28 03:38:01 crc kubenswrapper[4624]: I0228 03:38:01.936818 4624 generic.go:334] "Generic (PLEG): container finished" podID="49e3b3a9-1673-4360-8fb3-69b270e42534" 
containerID="b661fd3108c3c72893dda4403b22d8522e1b8ef019133809e9107b457e7d5731" exitCode=0 Feb 28 03:38:01 crc kubenswrapper[4624]: I0228 03:38:01.936903 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" event={"ID":"49e3b3a9-1673-4360-8fb3-69b270e42534","Type":"ContainerDied","Data":"b661fd3108c3c72893dda4403b22d8522e1b8ef019133809e9107b457e7d5731"} Feb 28 03:38:03 crc kubenswrapper[4624]: E0228 03:38:03.882764 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:03 crc kubenswrapper[4624]: E0228 03:38:03.913486 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:03 crc kubenswrapper[4624]: E0228 03:38:03.918464 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:03 crc kubenswrapper[4624]: E0228 03:38:03.918503 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 
28 03:38:04 crc kubenswrapper[4624]: I0228 03:38:04.101899 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:38:04 crc kubenswrapper[4624]: I0228 03:38:04.114661 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:38:04 crc kubenswrapper[4624]: I0228 03:38:04.412156 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-psbkg" Feb 28 03:38:06 crc kubenswrapper[4624]: I0228 03:38:06.743506 4624 patch_prober.go:28] interesting pod/route-controller-manager-5b479ddd7f-nppn9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body= Feb 28 03:38:06 crc kubenswrapper[4624]: I0228 03:38:06.743593 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" podUID="973e4692-3689-4011-94c4-06df1913c988" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" Feb 28 03:38:07 crc kubenswrapper[4624]: I0228 03:38:07.247597 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:38:07 crc kubenswrapper[4624]: E0228 03:38:07.601857 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f17b2_3180_41e3_a8cf_1f40338eadf0.slice/crio-7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f\": RecentStats: unable to find data in memory cache]" Feb 28 03:38:09 crc kubenswrapper[4624]: I0228 03:38:09.108060 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 28 03:38:09 crc kubenswrapper[4624]: I0228 03:38:09.770887 4624 patch_prober.go:28] interesting pod/controller-manager-76c4c5df84-fdbzb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:38:09 crc kubenswrapper[4624]: I0228 03:38:09.771411 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" podUID="49e3b3a9-1673-4360-8fb3-69b270e42534" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:38:11 crc kubenswrapper[4624]: I0228 03:38:11.044483 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hztbp_da0a1c2f-39f4-4a46-a1ff-1355c393f3c6/kube-multus-additional-cni-plugins/0.log" Feb 28 03:38:11 crc kubenswrapper[4624]: I0228 03:38:11.044543 4624 generic.go:334] "Generic (PLEG): container finished" podID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" exitCode=137 Feb 28 03:38:11 crc kubenswrapper[4624]: I0228 
03:38:11.044595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" event={"ID":"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6","Type":"ContainerDied","Data":"c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be"} Feb 28 03:38:11 crc kubenswrapper[4624]: I0228 03:38:11.868963 4624 ???:1] "http: TLS handshake error from 192.168.126.11:56714: no serving certificate available for the kubelet" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.392853 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.401199 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.428958 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8574f8b9f-bp8qf"] Feb 28 03:38:13 crc kubenswrapper[4624]: E0228 03:38:13.429282 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973e4692-3689-4011-94c4-06df1913c988" containerName="route-controller-manager" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.429302 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="973e4692-3689-4011-94c4-06df1913c988" containerName="route-controller-manager" Feb 28 03:38:13 crc kubenswrapper[4624]: E0228 03:38:13.429318 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e3b3a9-1673-4360-8fb3-69b270e42534" containerName="controller-manager" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.429329 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e3b3a9-1673-4360-8fb3-69b270e42534" containerName="controller-manager" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.429491 4624 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="49e3b3a9-1673-4360-8fb3-69b270e42534" containerName="controller-manager" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.429507 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="973e4692-3689-4011-94c4-06df1913c988" containerName="route-controller-manager" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.430478 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.462105 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=4.462064708 podStartE2EDuration="4.462064708s" podCreationTimestamp="2026-02-28 03:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:13.45622869 +0000 UTC m=+148.120267999" watchObservedRunningTime="2026-02-28 03:38:13.462064708 +0000 UTC m=+148.126104017" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.463671 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8574f8b9f-bp8qf"] Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.520236 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-config\") pod \"973e4692-3689-4011-94c4-06df1913c988\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.520391 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-config\") pod \"49e3b3a9-1673-4360-8fb3-69b270e42534\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " Feb 28 
03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.520420 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/973e4692-3689-4011-94c4-06df1913c988-serving-cert\") pod \"973e4692-3689-4011-94c4-06df1913c988\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.520520 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-client-ca\") pod \"973e4692-3689-4011-94c4-06df1913c988\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.522002 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-config" (OuterVolumeSpecName: "config") pod "49e3b3a9-1673-4360-8fb3-69b270e42534" (UID: "49e3b3a9-1673-4360-8fb3-69b270e42534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.522601 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-client-ca" (OuterVolumeSpecName: "client-ca") pod "973e4692-3689-4011-94c4-06df1913c988" (UID: "973e4692-3689-4011-94c4-06df1913c988"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.523327 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8p5k\" (UniqueName: \"kubernetes.io/projected/49e3b3a9-1673-4360-8fb3-69b270e42534-kube-api-access-s8p5k\") pod \"49e3b3a9-1673-4360-8fb3-69b270e42534\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.523469 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-client-ca\") pod \"49e3b3a9-1673-4360-8fb3-69b270e42534\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524191 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e3b3a9-1673-4360-8fb3-69b270e42534-serving-cert\") pod \"49e3b3a9-1673-4360-8fb3-69b270e42534\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524224 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbmbd\" (UniqueName: \"kubernetes.io/projected/973e4692-3689-4011-94c4-06df1913c988-kube-api-access-kbmbd\") pod \"973e4692-3689-4011-94c4-06df1913c988\" (UID: \"973e4692-3689-4011-94c4-06df1913c988\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524266 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-proxy-ca-bundles\") pod \"49e3b3a9-1673-4360-8fb3-69b270e42534\" (UID: \"49e3b3a9-1673-4360-8fb3-69b270e42534\") " Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524397 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-client-ca" (OuterVolumeSpecName: "client-ca") pod "49e3b3a9-1673-4360-8fb3-69b270e42534" (UID: "49e3b3a9-1673-4360-8fb3-69b270e42534"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524639 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzbt\" (UniqueName: \"kubernetes.io/projected/cef87e72-b520-465e-b2da-72352cd252c9-kube-api-access-5dzbt\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524694 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-proxy-ca-bundles\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524915 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "49e3b3a9-1673-4360-8fb3-69b270e42534" (UID: "49e3b3a9-1673-4360-8fb3-69b270e42534"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.524963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-config\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.525010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-client-ca\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.525191 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef87e72-b520-465e-b2da-72352cd252c9-serving-cert\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.525380 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.525395 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.525404 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.525417 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e3b3a9-1673-4360-8fb3-69b270e42534-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.528034 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-config" (OuterVolumeSpecName: "config") pod "973e4692-3689-4011-94c4-06df1913c988" (UID: "973e4692-3689-4011-94c4-06df1913c988"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.531277 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e3b3a9-1673-4360-8fb3-69b270e42534-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49e3b3a9-1673-4360-8fb3-69b270e42534" (UID: "49e3b3a9-1673-4360-8fb3-69b270e42534"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.531302 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973e4692-3689-4011-94c4-06df1913c988-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "973e4692-3689-4011-94c4-06df1913c988" (UID: "973e4692-3689-4011-94c4-06df1913c988"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.531332 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e3b3a9-1673-4360-8fb3-69b270e42534-kube-api-access-s8p5k" (OuterVolumeSpecName: "kube-api-access-s8p5k") pod "49e3b3a9-1673-4360-8fb3-69b270e42534" (UID: "49e3b3a9-1673-4360-8fb3-69b270e42534"). InnerVolumeSpecName "kube-api-access-s8p5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.543283 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973e4692-3689-4011-94c4-06df1913c988-kube-api-access-kbmbd" (OuterVolumeSpecName: "kube-api-access-kbmbd") pod "973e4692-3689-4011-94c4-06df1913c988" (UID: "973e4692-3689-4011-94c4-06df1913c988"). InnerVolumeSpecName "kube-api-access-kbmbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626776 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-config\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626822 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-client-ca\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626860 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cef87e72-b520-465e-b2da-72352cd252c9-serving-cert\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzbt\" (UniqueName: \"kubernetes.io/projected/cef87e72-b520-465e-b2da-72352cd252c9-kube-api-access-5dzbt\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626925 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-proxy-ca-bundles\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626968 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbmbd\" (UniqueName: \"kubernetes.io/projected/973e4692-3689-4011-94c4-06df1913c988-kube-api-access-kbmbd\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626979 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973e4692-3689-4011-94c4-06df1913c988-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.626989 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/973e4692-3689-4011-94c4-06df1913c988-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.627000 4624 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-s8p5k\" (UniqueName: \"kubernetes.io/projected/49e3b3a9-1673-4360-8fb3-69b270e42534-kube-api-access-s8p5k\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.627008 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e3b3a9-1673-4360-8fb3-69b270e42534-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.628665 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-client-ca\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.628696 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-proxy-ca-bundles\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.630065 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-config\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.636012 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef87e72-b520-465e-b2da-72352cd252c9-serving-cert\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " 
pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.645719 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzbt\" (UniqueName: \"kubernetes.io/projected/cef87e72-b520-465e-b2da-72352cd252c9-kube-api-access-5dzbt\") pod \"controller-manager-8574f8b9f-bp8qf\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: I0228 03:38:13.790159 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:13 crc kubenswrapper[4624]: E0228 03:38:13.864684 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:13 crc kubenswrapper[4624]: E0228 03:38:13.865010 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:13 crc kubenswrapper[4624]: E0228 03:38:13.865362 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" 
containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:13 crc kubenswrapper[4624]: E0228 03:38:13.865394 4624 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.079394 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" event={"ID":"973e4692-3689-4011-94c4-06df1913c988","Type":"ContainerDied","Data":"3828be0427beeff912c14f922a69a03f1b3e6df644b3d5e1ec85d62b1a73ba71"} Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.079820 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9" Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.080017 4624 scope.go:117] "RemoveContainer" containerID="981945461f1cd64a9e930f1b8c2a2dd83f85829d8734e702286703c6569e7427" Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.082735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" event={"ID":"49e3b3a9-1673-4360-8fb3-69b270e42534","Type":"ContainerDied","Data":"afa56433bf64881e2b6d3c6fdb336e9662dbbfb7cfa26ce75017b1282cb875eb"} Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.082801 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c4c5df84-fdbzb" Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.148213 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9"] Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.151074 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b479ddd7f-nppn9"] Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.153850 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c4c5df84-fdbzb"] Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.157391 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76c4c5df84-fdbzb"] Feb 28 03:38:14 crc kubenswrapper[4624]: I0228 03:38:14.289537 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ck5wp" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.275642 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e3b3a9-1673-4360-8fb3-69b270e42534" path="/var/lib/kubelet/pods/49e3b3a9-1673-4360-8fb3-69b270e42534/volumes" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.277118 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973e4692-3689-4011-94c4-06df1913c988" path="/var/lib/kubelet/pods/973e4692-3689-4011-94c4-06df1913c988/volumes" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.288967 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4"] Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.290642 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.296470 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.296775 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.297385 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.296911 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.299021 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.303203 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4"] Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.322453 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.376574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-config\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.376676 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-client-ca\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.376738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-serving-cert\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.376782 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp55l\" (UniqueName: \"kubernetes.io/projected/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-kube-api-access-wp55l\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.478235 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-client-ca\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.478306 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-serving-cert\") pod 
\"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.478358 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp55l\" (UniqueName: \"kubernetes.io/projected/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-kube-api-access-wp55l\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.478402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-config\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.479548 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-client-ca\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.480060 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-config\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.499479 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-serving-cert\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.516131 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp55l\" (UniqueName: \"kubernetes.io/projected/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-kube-api-access-wp55l\") pod \"route-controller-manager-7487f8fd6f-xn6m4\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.634935 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pcvf9"] Feb 28 03:38:16 crc kubenswrapper[4624]: I0228 03:38:16.639511 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:17 crc kubenswrapper[4624]: E0228 03:38:17.727221 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f17b2_3180_41e3_a8cf_1f40338eadf0.slice/crio-7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice\": RecentStats: unable to find data in memory cache]" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.748188 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.750493 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.750795 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.768594 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.769391 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.837884 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4404cbca-d669-484a-8354-fc91a39103d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.837955 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4404cbca-d669-484a-8354-fc91a39103d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.939208 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4404cbca-d669-484a-8354-fc91a39103d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.939341 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4404cbca-d669-484a-8354-fc91a39103d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.939438 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4404cbca-d669-484a-8354-fc91a39103d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:18 crc kubenswrapper[4624]: I0228 03:38:18.957742 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4404cbca-d669-484a-8354-fc91a39103d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:19 crc kubenswrapper[4624]: I0228 03:38:19.097492 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:20 crc kubenswrapper[4624]: I0228 03:38:20.226217 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8574f8b9f-bp8qf"] Feb 28 03:38:20 crc kubenswrapper[4624]: I0228 03:38:20.322221 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4"] Feb 28 03:38:21 crc kubenswrapper[4624]: I0228 03:38:21.550875 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 03:38:22 crc kubenswrapper[4624]: E0228 03:38:22.964067 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 03:38:22 crc kubenswrapper[4624]: E0228 03:38:22.964717 4624 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msnmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x54xc_openshift-marketplace(69a0ae1a-bcd4-41f5-af2c-07aebcb45296): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:22 crc kubenswrapper[4624]: E0228 03:38:22.965966 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-x54xc" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.030711 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.030912 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48dcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fals
e,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kgmt8_openshift-marketplace(02145e1a-bf6e-41a9-ac4c-a8fa7b186414): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.032676 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kgmt8" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.732285 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.733299 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.739965 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.824502 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-var-lock\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.824571 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kube-api-access\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.824601 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.864175 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.864897 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.865182 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 28 03:38:23 crc kubenswrapper[4624]: E0228 03:38:23.865248 4624 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.925652 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.925758 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-var-lock\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 
28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.925794 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kube-api-access\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.925910 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-var-lock\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.925973 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:23 crc kubenswrapper[4624]: I0228 03:38:23.945172 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kube-api-access\") pod \"installer-9-crc\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:24 crc kubenswrapper[4624]: I0228 03:38:24.071062 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:38:25 crc kubenswrapper[4624]: E0228 03:38:25.448348 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kgmt8" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" Feb 28 03:38:25 crc kubenswrapper[4624]: E0228 03:38:25.449006 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-x54xc" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" Feb 28 03:38:25 crc kubenswrapper[4624]: E0228 03:38:25.816051 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 03:38:25 crc kubenswrapper[4624]: E0228 03:38:25.816687 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwb82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v2pjq_openshift-marketplace(51920ae4-b602-4113-b233-57fdef96cd52): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:25 crc kubenswrapper[4624]: E0228 03:38:25.820601 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v2pjq" podUID="51920ae4-b602-4113-b233-57fdef96cd52" Feb 28 03:38:26 crc 
kubenswrapper[4624]: E0228 03:38:26.448481 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 03:38:26 crc kubenswrapper[4624]: E0228 03:38:26.448781 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znckv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-h895m_openshift-marketplace(9a269916-9894-4dcf-99db-7df5a1791898): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:26 crc kubenswrapper[4624]: E0228 03:38:26.450178 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-h895m" podUID="9a269916-9894-4dcf-99db-7df5a1791898" Feb 28 03:38:27 crc kubenswrapper[4624]: E0228 03:38:27.852209 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f17b2_3180_41e3_a8cf_1f40338eadf0.slice/crio-7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76.scope\": RecentStats: unable to find data in memory cache]" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.131391 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-h895m" podUID="9a269916-9894-4dcf-99db-7df5a1791898" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.131422 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v2pjq" podUID="51920ae4-b602-4113-b233-57fdef96cd52" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.188950 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hztbp_da0a1c2f-39f4-4a46-a1ff-1355c393f3c6/kube-multus-additional-cni-plugins/0.log" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.189057 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.227863 4624 scope.go:117] "RemoveContainer" containerID="b661fd3108c3c72893dda4403b22d8522e1b8ef019133809e9107b457e7d5731" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.247450 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.247608 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rx54v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qw86g_openshift-marketplace(1f8aeb46-02be-4b30-abdb-7c378da509ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.249438 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qw86g" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" Feb 28 03:38:28 crc 
kubenswrapper[4624]: E0228 03:38:28.253032 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.253216 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxdpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-r5lxr_openshift-marketplace(39526829-389b-49e1-8a31-5fee6a4ffa8f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.254293 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r5lxr" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.270462 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.270787 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rw2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jjcv7_openshift-marketplace(cd7f17b2-3180-41e3-a8cf-1f40338eadf0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.273721 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jjcv7" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" Feb 28 03:38:28 crc 
kubenswrapper[4624]: I0228 03:38:28.309165 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-tuning-conf-dir\") pod \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.309238 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-ready\") pod \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.309285 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgsl\" (UniqueName: \"kubernetes.io/projected/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-kube-api-access-phgsl\") pod \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.309426 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-cni-sysctl-allowlist\") pod \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\" (UID: \"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6\") " Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.310594 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" (UID: "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.311061 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-ready" (OuterVolumeSpecName: "ready") pod "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" (UID: "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.311698 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" (UID: "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.322075 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-kube-api-access-phgsl" (OuterVolumeSpecName: "kube-api-access-phgsl") pod "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" (UID: "da0a1c2f-39f4-4a46-a1ff-1355c393f3c6"). InnerVolumeSpecName "kube-api-access-phgsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.413134 4624 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.413173 4624 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.413183 4624 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-ready\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.413192 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phgsl\" (UniqueName: \"kubernetes.io/projected/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6-kube-api-access-phgsl\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.429336 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hztbp_da0a1c2f-39f4-4a46-a1ff-1355c393f3c6/kube-multus-additional-cni-plugins/0.log" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.429704 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" event={"ID":"da0a1c2f-39f4-4a46-a1ff-1355c393f3c6","Type":"ContainerDied","Data":"6c2589090ac8cfd1d8dbaabf6275e329f33e351f9870eafb9dd733d0ade63e2a"} Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.429764 4624 scope.go:117] "RemoveContainer" containerID="c3b879adf4150a7f41d97dc1e36ba586f663d3c6fbc7a225f584d44a9a9293be" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.429868 4624 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hztbp" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.436464 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qw86g" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.442513 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jjcv7" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" Feb 28 03:38:28 crc kubenswrapper[4624]: E0228 03:38:28.448753 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r5lxr" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.600230 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hztbp"] Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.604058 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hztbp"] Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.927522 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537498-x9wgz"] Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.938529 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8574f8b9f-bp8qf"] Feb 28 03:38:28 crc 
kubenswrapper[4624]: I0228 03:38:28.947521 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 03:38:28 crc kubenswrapper[4624]: W0228 03:38:28.965333 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4404cbca_d669_484a_8354_fc91a39103d3.slice/crio-18c57a824ea0b6fd559f54d213717ef0af6e3b24566934f03ea1eb10de90ca8f WatchSource:0}: Error finding container 18c57a824ea0b6fd559f54d213717ef0af6e3b24566934f03ea1eb10de90ca8f: Status 404 returned error can't find the container with id 18c57a824ea0b6fd559f54d213717ef0af6e3b24566934f03ea1eb10de90ca8f Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.982620 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4"] Feb 28 03:38:28 crc kubenswrapper[4624]: I0228 03:38:28.999012 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 03:38:29 crc kubenswrapper[4624]: W0228 03:38:29.015039 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6fbb0bd0_131a_412e_abb0_040e4e5ebf10.slice/crio-0e1d87e4cbd5567396c2fc470c9030cdd3b24b0aab5764923dec3c41ca07939b WatchSource:0}: Error finding container 0e1d87e4cbd5567396c2fc470c9030cdd3b24b0aab5764923dec3c41ca07939b: Status 404 returned error can't find the container with id 0e1d87e4cbd5567396c2fc470c9030cdd3b24b0aab5764923dec3c41ca07939b Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.544001 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" event={"ID":"cef87e72-b520-465e-b2da-72352cd252c9","Type":"ContainerStarted","Data":"e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.544738 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" event={"ID":"cef87e72-b520-465e-b2da-72352cd252c9","Type":"ContainerStarted","Data":"cbf0c62f7499838bfadf3ce87743bc5a4044deb862794ae2137ba6be516c7d8f"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.544828 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.544374 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" podUID="cef87e72-b520-465e-b2da-72352cd252c9" containerName="controller-manager" containerID="cri-o://e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b" gracePeriod=30 Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.558675 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.563611 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4404cbca-d669-484a-8354-fc91a39103d3","Type":"ContainerStarted","Data":"18c57a824ea0b6fd559f54d213717ef0af6e3b24566934f03ea1eb10de90ca8f"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.567046 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" event={"ID":"1331a3bd-c6c9-4357-83e7-9a37bccdaf44","Type":"ContainerStarted","Data":"cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.567236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" 
event={"ID":"1331a3bd-c6c9-4357-83e7-9a37bccdaf44","Type":"ContainerStarted","Data":"dbd82a90c80a0af9214c24c70f4fb6d51dd228e2a72bdf3ebb3ad33de3cddcbd"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.574353 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" podUID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" containerName="route-controller-manager" containerID="cri-o://cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35" gracePeriod=30 Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.575396 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.587144 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6fbb0bd0-131a-412e-abb0-040e4e5ebf10","Type":"ContainerStarted","Data":"0e1d87e4cbd5567396c2fc470c9030cdd3b24b0aab5764923dec3c41ca07939b"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.591198 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" podStartSLOduration=29.591168771 podStartE2EDuration="29.591168771s" podCreationTimestamp="2026-02-28 03:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:29.575687681 +0000 UTC m=+164.239726990" watchObservedRunningTime="2026-02-28 03:38:29.591168771 +0000 UTC m=+164.255208080" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.593847 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" 
event={"ID":"70fd0c80-14b5-4af4-bc5a-7ca64460bc65","Type":"ContainerStarted","Data":"2215d1e8398b9e4fa142870ec52839e546dbeb9f63d71ae53536d79c03ada835"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.597782 4624 generic.go:334] "Generic (PLEG): container finished" podID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerID="aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7" exitCode=0 Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.597994 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n698" event={"ID":"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f","Type":"ContainerDied","Data":"aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7"} Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.627013 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" podStartSLOduration=29.626990663 podStartE2EDuration="29.626990663s" podCreationTimestamp="2026-02-28 03:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:29.624235468 +0000 UTC m=+164.288274777" watchObservedRunningTime="2026-02-28 03:38:29.626990663 +0000 UTC m=+164.291029962" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.694998 4624 patch_prober.go:28] interesting pod/route-controller-manager-7487f8fd6f-xn6m4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:54674->10.217.0.58:8443: read: connection reset by peer" start-of-body= Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.695056 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" podUID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:54674->10.217.0.58:8443: read: connection reset by peer" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.928828 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.931854 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-client-ca\") pod \"cef87e72-b520-465e-b2da-72352cd252c9\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.931904 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef87e72-b520-465e-b2da-72352cd252c9-serving-cert\") pod \"cef87e72-b520-465e-b2da-72352cd252c9\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.931936 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-proxy-ca-bundles\") pod \"cef87e72-b520-465e-b2da-72352cd252c9\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.931971 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dzbt\" (UniqueName: \"kubernetes.io/projected/cef87e72-b520-465e-b2da-72352cd252c9-kube-api-access-5dzbt\") pod \"cef87e72-b520-465e-b2da-72352cd252c9\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.932160 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-config\") pod \"cef87e72-b520-465e-b2da-72352cd252c9\" (UID: \"cef87e72-b520-465e-b2da-72352cd252c9\") " Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.932946 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cef87e72-b520-465e-b2da-72352cd252c9" (UID: "cef87e72-b520-465e-b2da-72352cd252c9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.932812 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "cef87e72-b520-465e-b2da-72352cd252c9" (UID: "cef87e72-b520-465e-b2da-72352cd252c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.933166 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-config" (OuterVolumeSpecName: "config") pod "cef87e72-b520-465e-b2da-72352cd252c9" (UID: "cef87e72-b520-465e-b2da-72352cd252c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.939784 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef87e72-b520-465e-b2da-72352cd252c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cef87e72-b520-465e-b2da-72352cd252c9" (UID: "cef87e72-b520-465e-b2da-72352cd252c9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.940305 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7487f8fd6f-xn6m4_1331a3bd-c6c9-4357-83e7-9a37bccdaf44/route-controller-manager/0.log" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.940386 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:29 crc kubenswrapper[4624]: I0228 03:38:29.942255 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef87e72-b520-465e-b2da-72352cd252c9-kube-api-access-5dzbt" (OuterVolumeSpecName: "kube-api-access-5dzbt") pod "cef87e72-b520-465e-b2da-72352cd252c9" (UID: "cef87e72-b520-465e-b2da-72352cd252c9"). InnerVolumeSpecName "kube-api-access-5dzbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033014 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp55l\" (UniqueName: \"kubernetes.io/projected/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-kube-api-access-wp55l\") pod \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033102 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-client-ca\") pod \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033159 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-serving-cert\") pod 
\"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033185 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-config\") pod \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\" (UID: \"1331a3bd-c6c9-4357-83e7-9a37bccdaf44\") " Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033342 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033356 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef87e72-b520-465e-b2da-72352cd252c9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033365 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033377 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dzbt\" (UniqueName: \"kubernetes.io/projected/cef87e72-b520-465e-b2da-72352cd252c9-kube-api-access-5dzbt\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.033385 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef87e72-b520-465e-b2da-72352cd252c9-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.034151 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-config" (OuterVolumeSpecName: "config") pod 
"1331a3bd-c6c9-4357-83e7-9a37bccdaf44" (UID: "1331a3bd-c6c9-4357-83e7-9a37bccdaf44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.035589 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-client-ca" (OuterVolumeSpecName: "client-ca") pod "1331a3bd-c6c9-4357-83e7-9a37bccdaf44" (UID: "1331a3bd-c6c9-4357-83e7-9a37bccdaf44"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.038897 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1331a3bd-c6c9-4357-83e7-9a37bccdaf44" (UID: "1331a3bd-c6c9-4357-83e7-9a37bccdaf44"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.040398 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-kube-api-access-wp55l" (OuterVolumeSpecName: "kube-api-access-wp55l") pod "1331a3bd-c6c9-4357-83e7-9a37bccdaf44" (UID: "1331a3bd-c6c9-4357-83e7-9a37bccdaf44"). InnerVolumeSpecName "kube-api-access-wp55l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.096673 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" path="/var/lib/kubelet/pods/da0a1c2f-39f4-4a46-a1ff-1355c393f3c6/volumes" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.285074 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.285131 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.285154 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp55l\" (UniqueName: \"kubernetes.io/projected/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-kube-api-access-wp55l\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.285162 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1331a3bd-c6c9-4357-83e7-9a37bccdaf44-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.369594 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw"] Feb 28 03:38:30 crc kubenswrapper[4624]: E0228 03:38:30.369993 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" containerName="route-controller-manager" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.370011 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" containerName="route-controller-manager" Feb 28 03:38:30 crc 
kubenswrapper[4624]: E0228 03:38:30.370025 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.370032 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:38:30 crc kubenswrapper[4624]: E0228 03:38:30.370044 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef87e72-b520-465e-b2da-72352cd252c9" containerName="controller-manager" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.370051 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef87e72-b520-465e-b2da-72352cd252c9" containerName="controller-manager" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.370326 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0a1c2f-39f4-4a46-a1ff-1355c393f3c6" containerName="kube-multus-additional-cni-plugins" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.370392 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef87e72-b520-465e-b2da-72352cd252c9" containerName="controller-manager" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.370406 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" containerName="route-controller-manager" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.372090 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.372722 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.373212 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.504684 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-config\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.505040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9pp\" (UniqueName: \"kubernetes.io/projected/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-kube-api-access-bn9pp\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.506010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-client-ca\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.506146 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-proxy-ca-bundles\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.506295 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-serving-cert\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.547438 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.594385 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607490 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7487f8fd6f-xn6m4_1331a3bd-c6c9-4357-83e7-9a37bccdaf44/route-controller-manager/0.log" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607544 4624 generic.go:334] "Generic (PLEG): container finished" podID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" containerID="cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35" exitCode=255 Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607601 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf9j2\" (UniqueName: \"kubernetes.io/projected/1e0f2ed2-34e3-4d84-a543-898fa4baec08-kube-api-access-pf9j2\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607622 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" 
event={"ID":"1331a3bd-c6c9-4357-83e7-9a37bccdaf44","Type":"ContainerDied","Data":"cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35"} Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" event={"ID":"1331a3bd-c6c9-4357-83e7-9a37bccdaf44","Type":"ContainerDied","Data":"dbd82a90c80a0af9214c24c70f4fb6d51dd228e2a72bdf3ebb3ad33de3cddcbd"} Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607657 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-client-ca\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607672 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607732 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-proxy-ca-bundles\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607840 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-serving-cert\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607894 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0f2ed2-34e3-4d84-a543-898fa4baec08-serving-cert\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607967 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-config\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.607996 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-config\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.608027 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9pp\" (UniqueName: \"kubernetes.io/projected/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-kube-api-access-bn9pp\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.608060 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-client-ca\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.608254 4624 scope.go:117] "RemoveContainer" containerID="cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.609435 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-client-ca\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.610251 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"6fbb0bd0-131a-412e-abb0-040e4e5ebf10","Type":"ContainerStarted","Data":"78ef91b035bcb58d965a489bdfa722f7d9013604ba2e139f75da3d572762dfb2"} Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.615807 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-proxy-ca-bundles\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.616473 4624 generic.go:334] "Generic (PLEG): container finished" podID="cef87e72-b520-465e-b2da-72352cd252c9" containerID="e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b" exitCode=0 Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.616586 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" event={"ID":"cef87e72-b520-465e-b2da-72352cd252c9","Type":"ContainerDied","Data":"e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b"} Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.616623 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" event={"ID":"cef87e72-b520-465e-b2da-72352cd252c9","Type":"ContainerDied","Data":"cbf0c62f7499838bfadf3ce87743bc5a4044deb862794ae2137ba6be516c7d8f"} Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.618418 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8574f8b9f-bp8qf" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.620754 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-serving-cert\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.626861 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-config\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.630819 4624 generic.go:334] "Generic (PLEG): container finished" podID="4404cbca-d669-484a-8354-fc91a39103d3" containerID="2c25ea2f66cc955d81fab43d37af75263b8de329b9a619f288d25b992a9694cc" exitCode=0 Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.630877 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4404cbca-d669-484a-8354-fc91a39103d3","Type":"ContainerDied","Data":"2c25ea2f66cc955d81fab43d37af75263b8de329b9a619f288d25b992a9694cc"} Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.664137 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9pp\" (UniqueName: \"kubernetes.io/projected/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-kube-api-access-bn9pp\") pod \"controller-manager-bb8f8c5bf-dzss4\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.670906 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.670888902 podStartE2EDuration="7.670888902s" podCreationTimestamp="2026-02-28 03:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:30.646379297 +0000 UTC m=+165.310418606" watchObservedRunningTime="2026-02-28 03:38:30.670888902 +0000 UTC m=+165.334928211" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.676634 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.687748 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7487f8fd6f-xn6m4"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.692473 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.714786 4624 scope.go:117] "RemoveContainer" containerID="cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35" Feb 28 03:38:30 crc kubenswrapper[4624]: E0228 03:38:30.719121 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35\": container with ID starting with cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35 not found: ID does not exist" containerID="cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.719194 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35"} err="failed to get container status \"cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35\": rpc error: code = NotFound desc = could not find container \"cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35\": container with ID starting with cf3bd1cebb63cdcb17d42dfe0d34015850ca5f062266cd5cd61cb752312dac35 not found: ID does not exist" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.719248 4624 scope.go:117] "RemoveContainer" containerID="e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.728717 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0f2ed2-34e3-4d84-a543-898fa4baec08-serving-cert\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 
03:38:30.729148 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-config\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.729212 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-client-ca\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.729355 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf9j2\" (UniqueName: \"kubernetes.io/projected/1e0f2ed2-34e3-4d84-a543-898fa4baec08-kube-api-access-pf9j2\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.731920 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-client-ca\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.732155 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-config\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: 
\"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.737328 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0f2ed2-34e3-4d84-a543-898fa4baec08-serving-cert\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.767878 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf9j2\" (UniqueName: \"kubernetes.io/projected/1e0f2ed2-34e3-4d84-a543-898fa4baec08-kube-api-access-pf9j2\") pod \"route-controller-manager-557d9f566-lxhnw\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.775019 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8574f8b9f-bp8qf"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.785222 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8574f8b9f-bp8qf"] Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.818829 4624 scope.go:117] "RemoveContainer" containerID="e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b" Feb 28 03:38:30 crc kubenswrapper[4624]: E0228 03:38:30.823333 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b\": container with ID starting with e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b not found: ID does not exist" 
containerID="e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.823378 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b"} err="failed to get container status \"e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b\": rpc error: code = NotFound desc = could not find container \"e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b\": container with ID starting with e7848adad5763f9121b4023ed6185f0c87815f5f5e07a366e11d980a562f9a8b not found: ID does not exist" Feb 28 03:38:30 crc kubenswrapper[4624]: I0228 03:38:30.999285 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.155753 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4"] Feb 28 03:38:31 crc kubenswrapper[4624]: W0228 03:38:31.166756 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd9b36be_a52a_441b_a5b1_0373ce8cdec0.slice/crio-eac425e0e65e5836cb6cd0cc0d7c7312d6a07b689167d42b9a996d6ab4195738 WatchSource:0}: Error finding container eac425e0e65e5836cb6cd0cc0d7c7312d6a07b689167d42b9a996d6ab4195738: Status 404 returned error can't find the container with id eac425e0e65e5836cb6cd0cc0d7c7312d6a07b689167d42b9a996d6ab4195738 Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.540938 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw"] Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.664363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n698" 
event={"ID":"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f","Type":"ContainerStarted","Data":"c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c"} Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.673801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" event={"ID":"1e0f2ed2-34e3-4d84-a543-898fa4baec08","Type":"ContainerStarted","Data":"e038db960c84021fa4c58cda9c9b7989181bee15e47bed5cd67151c084951d79"} Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.686422 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" event={"ID":"fd9b36be-a52a-441b-a5b1-0373ce8cdec0","Type":"ContainerStarted","Data":"72113634213f877f918794e6a83e43d8df52d68e4df1a9bf167c5417a79a8661"} Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.686486 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" event={"ID":"fd9b36be-a52a-441b-a5b1-0373ce8cdec0","Type":"ContainerStarted","Data":"eac425e0e65e5836cb6cd0cc0d7c7312d6a07b689167d42b9a996d6ab4195738"} Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.687548 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.716598 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.741761 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2n698" podStartSLOduration=6.491543484 podStartE2EDuration="48.741742802s" podCreationTimestamp="2026-02-28 03:37:43 +0000 UTC" firstStartedPulling="2026-02-28 03:37:47.751046604 +0000 UTC m=+122.415085903" 
lastFinishedPulling="2026-02-28 03:38:30.001245912 +0000 UTC m=+164.665285221" observedRunningTime="2026-02-28 03:38:31.740824858 +0000 UTC m=+166.404864157" watchObservedRunningTime="2026-02-28 03:38:31.741742802 +0000 UTC m=+166.405782111" Feb 28 03:38:31 crc kubenswrapper[4624]: I0228 03:38:31.763719 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" podStartSLOduration=11.763680867 podStartE2EDuration="11.763680867s" podCreationTimestamp="2026-02-28 03:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:31.757382607 +0000 UTC m=+166.421421916" watchObservedRunningTime="2026-02-28 03:38:31.763680867 +0000 UTC m=+166.427720176" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.110409 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1331a3bd-c6c9-4357-83e7-9a37bccdaf44" path="/var/lib/kubelet/pods/1331a3bd-c6c9-4357-83e7-9a37bccdaf44/volumes" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.111431 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef87e72-b520-465e-b2da-72352cd252c9" path="/var/lib/kubelet/pods/cef87e72-b520-465e-b2da-72352cd252c9/volumes" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.154204 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.274663 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4404cbca-d669-484a-8354-fc91a39103d3-kube-api-access\") pod \"4404cbca-d669-484a-8354-fc91a39103d3\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.274803 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4404cbca-d669-484a-8354-fc91a39103d3-kubelet-dir\") pod \"4404cbca-d669-484a-8354-fc91a39103d3\" (UID: \"4404cbca-d669-484a-8354-fc91a39103d3\") " Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.275116 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4404cbca-d669-484a-8354-fc91a39103d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4404cbca-d669-484a-8354-fc91a39103d3" (UID: "4404cbca-d669-484a-8354-fc91a39103d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.293062 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4404cbca-d669-484a-8354-fc91a39103d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4404cbca-d669-484a-8354-fc91a39103d3" (UID: "4404cbca-d669-484a-8354-fc91a39103d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.381269 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4404cbca-d669-484a-8354-fc91a39103d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.381326 4624 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4404cbca-d669-484a-8354-fc91a39103d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.700260 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.700425 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4404cbca-d669-484a-8354-fc91a39103d3","Type":"ContainerDied","Data":"18c57a824ea0b6fd559f54d213717ef0af6e3b24566934f03ea1eb10de90ca8f"} Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.700494 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c57a824ea0b6fd559f54d213717ef0af6e3b24566934f03ea1eb10de90ca8f" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.703595 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" event={"ID":"1e0f2ed2-34e3-4d84-a543-898fa4baec08","Type":"ContainerStarted","Data":"50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f"} Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.704502 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.727871 4624 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:32 crc kubenswrapper[4624]: I0228 03:38:32.735294 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" podStartSLOduration=12.735265306 podStartE2EDuration="12.735265306s" podCreationTimestamp="2026-02-28 03:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:32.734048572 +0000 UTC m=+167.398087881" watchObservedRunningTime="2026-02-28 03:38:32.735265306 +0000 UTC m=+167.399304615" Feb 28 03:38:34 crc kubenswrapper[4624]: I0228 03:38:34.493700 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:38:34 crc kubenswrapper[4624]: I0228 03:38:34.494260 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:38:34 crc kubenswrapper[4624]: I0228 03:38:34.639709 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:38:37 crc kubenswrapper[4624]: E0228 03:38:37.995456 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7f17b2_3180_41e3_a8cf_1f40338eadf0.slice/crio-7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice/crio-af234837cebb3511a0ee0c903cb1734ad50de074a0fa4c648b5c2b497ac2d72f\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28db6c5_346b_4b5a_be0d_0a0165ae4c8c.slice\": RecentStats: unable to find data in memory cache]" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.243648 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4"] Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.244170 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" podUID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" containerName="controller-manager" containerID="cri-o://72113634213f877f918794e6a83e43d8df52d68e4df1a9bf167c5417a79a8661" gracePeriod=30 Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.274712 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw"] Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.274976 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" podUID="1e0f2ed2-34e3-4d84-a543-898fa4baec08" containerName="route-controller-manager" containerID="cri-o://50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f" gracePeriod=30 Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.832497 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.861163 4624 generic.go:334] "Generic (PLEG): container finished" podID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerID="733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706" exitCode=0 Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.861247 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5lxr" event={"ID":"39526829-389b-49e1-8a31-5fee6a4ffa8f","Type":"ContainerDied","Data":"733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.864864 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerStarted","Data":"4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.874098 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerStarted","Data":"6fc164f32767a348e385160ee2f32cdb74fc0978ca2a0297033e69a04da45eb2"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.878137 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" event={"ID":"70fd0c80-14b5-4af4-bc5a-7ca64460bc65","Type":"ContainerStarted","Data":"3101d6ac8b3373b027116bbb16248a0a75ca938bf16ce168335013f09725cc05"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.879903 4624 generic.go:334] "Generic (PLEG): container finished" podID="1e0f2ed2-34e3-4d84-a543-898fa4baec08" containerID="50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f" exitCode=0 Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.879980 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" event={"ID":"1e0f2ed2-34e3-4d84-a543-898fa4baec08","Type":"ContainerDied","Data":"50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.880012 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" event={"ID":"1e0f2ed2-34e3-4d84-a543-898fa4baec08","Type":"ContainerDied","Data":"e038db960c84021fa4c58cda9c9b7989181bee15e47bed5cd67151c084951d79"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.880032 4624 scope.go:117] "RemoveContainer" containerID="50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.880166 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.882577 4624 generic.go:334] "Generic (PLEG): container finished" podID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" containerID="72113634213f877f918794e6a83e43d8df52d68e4df1a9bf167c5417a79a8661" exitCode=0 Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.882611 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" event={"ID":"fd9b36be-a52a-441b-a5b1-0373ce8cdec0","Type":"ContainerDied","Data":"72113634213f877f918794e6a83e43d8df52d68e4df1a9bf167c5417a79a8661"} Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.896111 4624 scope.go:117] "RemoveContainer" containerID="50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f" Feb 28 03:38:40 crc kubenswrapper[4624]: E0228 03:38:40.899388 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f\": container with ID starting with 50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f not found: ID does not exist" containerID="50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.899435 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f"} err="failed to get container status \"50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f\": rpc error: code = NotFound desc = could not find container \"50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f\": container with ID starting with 50330efa8e8e579b78700251ae2ff3729d88dfcbeac7b0d63b11887abcc5657f not found: ID does not exist" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.900806 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf9j2\" (UniqueName: \"kubernetes.io/projected/1e0f2ed2-34e3-4d84-a543-898fa4baec08-kube-api-access-pf9j2\") pod \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.900948 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0f2ed2-34e3-4d84-a543-898fa4baec08-serving-cert\") pod \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.901004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-client-ca\") pod \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.901062 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-config\") pod \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\" (UID: \"1e0f2ed2-34e3-4d84-a543-898fa4baec08\") " Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.903999 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-config" (OuterVolumeSpecName: "config") pod "1e0f2ed2-34e3-4d84-a543-898fa4baec08" (UID: "1e0f2ed2-34e3-4d84-a543-898fa4baec08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.905055 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e0f2ed2-34e3-4d84-a543-898fa4baec08" (UID: "1e0f2ed2-34e3-4d84-a543-898fa4baec08"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.910307 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0f2ed2-34e3-4d84-a543-898fa4baec08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e0f2ed2-34e3-4d84-a543-898fa4baec08" (UID: "1e0f2ed2-34e3-4d84-a543-898fa4baec08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.910469 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0f2ed2-34e3-4d84-a543-898fa4baec08-kube-api-access-pf9j2" (OuterVolumeSpecName: "kube-api-access-pf9j2") pod "1e0f2ed2-34e3-4d84-a543-898fa4baec08" (UID: "1e0f2ed2-34e3-4d84-a543-898fa4baec08"). InnerVolumeSpecName "kube-api-access-pf9j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.957743 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" podStartSLOduration=29.859197099 podStartE2EDuration="40.95772453s" podCreationTimestamp="2026-02-28 03:38:00 +0000 UTC" firstStartedPulling="2026-02-28 03:38:28.966692145 +0000 UTC m=+163.630731454" lastFinishedPulling="2026-02-28 03:38:40.065219586 +0000 UTC m=+174.729258885" observedRunningTime="2026-02-28 03:38:40.955926001 +0000 UTC m=+175.619965310" watchObservedRunningTime="2026-02-28 03:38:40.95772453 +0000 UTC m=+175.621763829" Feb 28 03:38:40 crc kubenswrapper[4624]: I0228 03:38:40.972217 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.002722 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn9pp\" (UniqueName: \"kubernetes.io/projected/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-kube-api-access-bn9pp\") pod \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.002780 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-proxy-ca-bundles\") pod \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.002856 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-client-ca\") pod \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " Feb 28 03:38:41 crc 
kubenswrapper[4624]: I0228 03:38:41.002932 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-serving-cert\") pod \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.003037 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-config\") pod \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\" (UID: \"fd9b36be-a52a-441b-a5b1-0373ce8cdec0\") " Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.003332 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.003352 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf9j2\" (UniqueName: \"kubernetes.io/projected/1e0f2ed2-34e3-4d84-a543-898fa4baec08-kube-api-access-pf9j2\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.003362 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0f2ed2-34e3-4d84-a543-898fa4baec08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.003373 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0f2ed2-34e3-4d84-a543-898fa4baec08-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.004014 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"fd9b36be-a52a-441b-a5b1-0373ce8cdec0" (UID: "fd9b36be-a52a-441b-a5b1-0373ce8cdec0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.004206 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd9b36be-a52a-441b-a5b1-0373ce8cdec0" (UID: "fd9b36be-a52a-441b-a5b1-0373ce8cdec0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.005350 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-config" (OuterVolumeSpecName: "config") pod "fd9b36be-a52a-441b-a5b1-0373ce8cdec0" (UID: "fd9b36be-a52a-441b-a5b1-0373ce8cdec0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.006227 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-kube-api-access-bn9pp" (OuterVolumeSpecName: "kube-api-access-bn9pp") pod "fd9b36be-a52a-441b-a5b1-0373ce8cdec0" (UID: "fd9b36be-a52a-441b-a5b1-0373ce8cdec0"). InnerVolumeSpecName "kube-api-access-bn9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.009381 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd9b36be-a52a-441b-a5b1-0373ce8cdec0" (UID: "fd9b36be-a52a-441b-a5b1-0373ce8cdec0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.105018 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.105706 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn9pp\" (UniqueName: \"kubernetes.io/projected/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-kube-api-access-bn9pp\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.105785 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.105851 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.105922 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd9b36be-a52a-441b-a5b1-0373ce8cdec0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.209338 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw"] Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.217997 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-557d9f566-lxhnw"] Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.277496 4624 csr.go:261] certificate signing request csr-spfdt is approved, waiting to be issued Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.285576 4624 
csr.go:257] certificate signing request csr-spfdt is issued Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.694323 4624 patch_prober.go:28] interesting pod/controller-manager-bb8f8c5bf-dzss4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.694402 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" podUID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.699141 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerName="oauth-openshift" containerID="cri-o://58873711eec886da6eddac8c3822efa5df942c4ed2483268d042eda2bfa84cb2" gracePeriod=15 Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.836669 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67fdcb99c-44lth"] Feb 28 03:38:41 crc kubenswrapper[4624]: E0228 03:38:41.837355 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4404cbca-d669-484a-8354-fc91a39103d3" containerName="pruner" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.837391 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4404cbca-d669-484a-8354-fc91a39103d3" containerName="pruner" Feb 28 03:38:41 crc kubenswrapper[4624]: E0228 03:38:41.837417 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" containerName="controller-manager" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.837433 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" containerName="controller-manager" Feb 28 03:38:41 crc kubenswrapper[4624]: E0228 03:38:41.837459 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0f2ed2-34e3-4d84-a543-898fa4baec08" containerName="route-controller-manager" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.837474 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0f2ed2-34e3-4d84-a543-898fa4baec08" containerName="route-controller-manager" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.837641 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" containerName="controller-manager" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.837685 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0f2ed2-34e3-4d84-a543-898fa4baec08" containerName="route-controller-manager" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.837712 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4404cbca-d669-484a-8354-fc91a39103d3" containerName="pruner" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.838461 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.846633 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5"] Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.848297 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.851200 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.851485 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.851832 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.857240 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.857803 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.860930 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.862946 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5"] Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.899238 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fdcb99c-44lth"] Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.915876 4624 generic.go:334] "Generic (PLEG): container finished" podID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerID="4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec" exitCode=0 Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.916026 4624 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerDied","Data":"4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec"} Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918730 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-client-ca\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918799 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hclx\" (UniqueName: \"kubernetes.io/projected/6ea3687a-e119-425f-87cb-de1ef4ee14f1-kube-api-access-5hclx\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918874 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-config\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918901 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv5pt\" (UniqueName: \"kubernetes.io/projected/72613473-b58a-4d55-9fa2-2e751ee1312b-kube-api-access-xv5pt\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 
03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918930 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-proxy-ca-bundles\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918967 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-config\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.918991 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-client-ca\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.919028 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea3687a-e119-425f-87cb-de1ef4ee14f1-serving-cert\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.919051 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/72613473-b58a-4d55-9fa2-2e751ee1312b-serving-cert\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.919589 4624 generic.go:334] "Generic (PLEG): container finished" podID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerID="6fc164f32767a348e385160ee2f32cdb74fc0978ca2a0297033e69a04da45eb2" exitCode=0 Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.919657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerDied","Data":"6fc164f32767a348e385160ee2f32cdb74fc0978ca2a0297033e69a04da45eb2"} Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.923404 4624 generic.go:334] "Generic (PLEG): container finished" podID="70fd0c80-14b5-4af4-bc5a-7ca64460bc65" containerID="3101d6ac8b3373b027116bbb16248a0a75ca938bf16ce168335013f09725cc05" exitCode=0 Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.923480 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" event={"ID":"70fd0c80-14b5-4af4-bc5a-7ca64460bc65","Type":"ContainerDied","Data":"3101d6ac8b3373b027116bbb16248a0a75ca938bf16ce168335013f09725cc05"} Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.925971 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" event={"ID":"fd9b36be-a52a-441b-a5b1-0373ce8cdec0","Type":"ContainerDied","Data":"eac425e0e65e5836cb6cd0cc0d7c7312d6a07b689167d42b9a996d6ab4195738"} Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.926012 4624 scope.go:117] "RemoveContainer" containerID="72113634213f877f918794e6a83e43d8df52d68e4df1a9bf167c5417a79a8661" Feb 28 03:38:41 crc kubenswrapper[4624]: I0228 03:38:41.926417 4624 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.000054 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4"] Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.002859 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bb8f8c5bf-dzss4"] Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.020049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hclx\" (UniqueName: \"kubernetes.io/projected/6ea3687a-e119-425f-87cb-de1ef4ee14f1-kube-api-access-5hclx\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.020145 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-client-ca\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.021034 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-client-ca\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.021165 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-config\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.021196 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv5pt\" (UniqueName: \"kubernetes.io/projected/72613473-b58a-4d55-9fa2-2e751ee1312b-kube-api-access-xv5pt\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.022257 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-proxy-ca-bundles\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.022194 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-config\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.022322 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-config\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.022344 
4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-client-ca\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.023260 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea3687a-e119-425f-87cb-de1ef4ee14f1-serving-cert\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.023384 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72613473-b58a-4d55-9fa2-2e751ee1312b-serving-cert\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.023457 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-proxy-ca-bundles\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.023293 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-config\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " 
pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.023851 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-client-ca\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.038002 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea3687a-e119-425f-87cb-de1ef4ee14f1-serving-cert\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.038030 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72613473-b58a-4d55-9fa2-2e751ee1312b-serving-cert\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.043409 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hclx\" (UniqueName: \"kubernetes.io/projected/6ea3687a-e119-425f-87cb-de1ef4ee14f1-kube-api-access-5hclx\") pod \"controller-manager-67fdcb99c-44lth\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.044254 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv5pt\" (UniqueName: 
\"kubernetes.io/projected/72613473-b58a-4d55-9fa2-2e751ee1312b-kube-api-access-xv5pt\") pod \"route-controller-manager-5cfd9b9c67-4qwp5\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.107677 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0f2ed2-34e3-4d84-a543-898fa4baec08" path="/var/lib/kubelet/pods/1e0f2ed2-34e3-4d84-a543-898fa4baec08/volumes" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.108670 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9b36be-a52a-441b-a5b1-0373ce8cdec0" path="/var/lib/kubelet/pods/fd9b36be-a52a-441b-a5b1-0373ce8cdec0/volumes" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.158684 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.168618 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.287137 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-06 04:36:18.546185331 +0000 UTC Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.287188 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7488h57m36.259002578s for next certificate rotation Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.939361 4624 generic.go:334] "Generic (PLEG): container finished" podID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerID="58873711eec886da6eddac8c3822efa5df942c4ed2483268d042eda2bfa84cb2" exitCode=0 Feb 28 03:38:42 crc kubenswrapper[4624]: I0228 03:38:42.939471 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" event={"ID":"e8913a76-5e7d-4d49-a9a4-388c052cf594","Type":"ContainerDied","Data":"58873711eec886da6eddac8c3822efa5df942c4ed2483268d042eda2bfa84cb2"} Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.287350 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-27 23:46:17.128458132 +0000 UTC Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.287399 4624 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6548h7m33.841062865s for next certificate rotation Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.415348 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.427736 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446458 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-ocp-branding-template\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446544 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-error\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446596 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-idp-0-file-data\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446617 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-serving-cert\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446653 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-provider-selection\") pod 
\"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446698 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-policies\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446725 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-login\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446750 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-cliconfig\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446863 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-router-certs\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446884 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-session\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc 
kubenswrapper[4624]: I0228 03:38:43.446952 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-service-ca\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.446982 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxpfb\" (UniqueName: \"kubernetes.io/projected/70fd0c80-14b5-4af4-bc5a-7ca64460bc65-kube-api-access-kxpfb\") pod \"70fd0c80-14b5-4af4-bc5a-7ca64460bc65\" (UID: \"70fd0c80-14b5-4af4-bc5a-7ca64460bc65\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.447026 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cnb4\" (UniqueName: \"kubernetes.io/projected/e8913a76-5e7d-4d49-a9a4-388c052cf594-kube-api-access-4cnb4\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.447048 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-trusted-ca-bundle\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.447072 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-dir\") pod \"e8913a76-5e7d-4d49-a9a4-388c052cf594\" (UID: \"e8913a76-5e7d-4d49-a9a4-388c052cf594\") " Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.447324 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.450927 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.452074 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.456284 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.457907 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.458386 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.473303 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.476674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.477131 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.491608 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.491800 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8913a76-5e7d-4d49-a9a4-388c052cf594-kube-api-access-4cnb4" (OuterVolumeSpecName: "kube-api-access-4cnb4") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "kube-api-access-4cnb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.498593 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fd0c80-14b5-4af4-bc5a-7ca64460bc65-kube-api-access-kxpfb" (OuterVolumeSpecName: "kube-api-access-kxpfb") pod "70fd0c80-14b5-4af4-bc5a-7ca64460bc65" (UID: "70fd0c80-14b5-4af4-bc5a-7ca64460bc65"). InnerVolumeSpecName "kube-api-access-kxpfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.499077 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.517679 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.519170 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e8913a76-5e7d-4d49-a9a4-388c052cf594" (UID: "e8913a76-5e7d-4d49-a9a4-388c052cf594"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552296 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552388 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552410 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552427 4624 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552444 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552467 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552481 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552498 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552510 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552524 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxpfb\" (UniqueName: \"kubernetes.io/projected/70fd0c80-14b5-4af4-bc5a-7ca64460bc65-kube-api-access-kxpfb\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552539 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cnb4\" (UniqueName: \"kubernetes.io/projected/e8913a76-5e7d-4d49-a9a4-388c052cf594-kube-api-access-4cnb4\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552551 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552564 4624 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8913a76-5e7d-4d49-a9a4-388c052cf594-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552577 4624 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.552594 4624 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8913a76-5e7d-4d49-a9a4-388c052cf594-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.770917 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5"] Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.798107 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fdcb99c-44lth"] Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.833709 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6994c46f75-gjtb6"] Feb 28 03:38:43 crc kubenswrapper[4624]: E0228 03:38:43.833948 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fd0c80-14b5-4af4-bc5a-7ca64460bc65" containerName="oc" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.833964 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fd0c80-14b5-4af4-bc5a-7ca64460bc65" containerName="oc" Feb 28 03:38:43 crc kubenswrapper[4624]: E0228 03:38:43.833979 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerName="oauth-openshift" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.833985 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerName="oauth-openshift" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.834110 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="70fd0c80-14b5-4af4-bc5a-7ca64460bc65" containerName="oc" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.834123 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" containerName="oauth-openshift" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.834624 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.864264 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6994c46f75-gjtb6"] Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.954285 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" event={"ID":"72613473-b58a-4d55-9fa2-2e751ee1312b","Type":"ContainerStarted","Data":"71291d97d37e72001ee643780cb388f46bd3e930fab96f14a69df6c9e0fefcbb"} Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.956019 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerStarted","Data":"54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba"} Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961570 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-login\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961647 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961676 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961705 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerStarted","Data":"e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91"} Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961729 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-router-certs\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961747 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-error\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " 
pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961768 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961791 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac55e312-5949-43b0-bc55-e445b4be3952-audit-dir\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961812 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961829 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-service-ca\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961848 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961868 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf8p\" (UniqueName: \"kubernetes.io/projected/ac55e312-5949-43b0-bc55-e445b4be3952-kube-api-access-cdf8p\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961887 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-session\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961909 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-audit-policies\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.961929 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.969024 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.969293 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537498-x9wgz" event={"ID":"70fd0c80-14b5-4af4-bc5a-7ca64460bc65","Type":"ContainerDied","Data":"2215d1e8398b9e4fa142870ec52839e546dbeb9f63d71ae53536d79c03ada835"} Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.969344 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2215d1e8398b9e4fa142870ec52839e546dbeb9f63d71ae53536d79c03ada835" Feb 28 03:38:43 crc kubenswrapper[4624]: I0228 03:38:43.990438 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" event={"ID":"6ea3687a-e119-425f-87cb-de1ef4ee14f1","Type":"ContainerStarted","Data":"a96de88f699bade54332df50b3b18b4996d8990ea3897259ff2ce7940e80a36e"} Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.006939 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerStarted","Data":"9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5"} Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.014307 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" event={"ID":"e8913a76-5e7d-4d49-a9a4-388c052cf594","Type":"ContainerDied","Data":"556fd9337129fdf7ae397c61bcea7e5a7dd52417c1ae7a81c0e987353e859c9e"} Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.014710 4624 scope.go:117] "RemoveContainer" 
containerID="58873711eec886da6eddac8c3822efa5df942c4ed2483268d042eda2bfa84cb2" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.014947 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pcvf9" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063657 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063716 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-router-certs\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063737 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-error\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063756 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: 
\"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063780 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac55e312-5949-43b0-bc55-e445b4be3952-audit-dir\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063798 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063814 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-service-ca\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063835 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063852 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cdf8p\" (UniqueName: \"kubernetes.io/projected/ac55e312-5949-43b0-bc55-e445b4be3952-kube-api-access-cdf8p\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063868 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-session\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063901 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-audit-policies\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063924 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063945 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-login\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " 
pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.063994 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.064074 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac55e312-5949-43b0-bc55-e445b4be3952-audit-dir\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.065863 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-audit-policies\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.076517 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-router-certs\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.077967 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.078850 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-service-ca\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.079644 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.080644 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-session\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.080835 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " 
pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.082274 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.084754 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.084879 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.086835 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-error\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.098103 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac55e312-5949-43b0-bc55-e445b4be3952-v4-0-config-user-template-login\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.102596 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf8p\" (UniqueName: \"kubernetes.io/projected/ac55e312-5949-43b0-bc55-e445b4be3952-kube-api-access-cdf8p\") pod \"oauth-openshift-6994c46f75-gjtb6\" (UID: \"ac55e312-5949-43b0-bc55-e445b4be3952\") " pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.118627 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pcvf9"] Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.120793 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pcvf9"] Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.168736 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.583889 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:38:44 crc kubenswrapper[4624]: I0228 03:38:44.638545 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6994c46f75-gjtb6"] Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.022473 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerStarted","Data":"f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.023157 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" event={"ID":"ac55e312-5949-43b0-bc55-e445b4be3952","Type":"ContainerStarted","Data":"2e51dd691ee19871e9abf9108ace192208982db4ba45bf58d6df2e26023869bb"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.026293 4624 generic.go:334] "Generic (PLEG): container finished" podID="51920ae4-b602-4113-b233-57fdef96cd52" containerID="9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5" exitCode=0 Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.026405 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerDied","Data":"9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.029933 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" 
event={"ID":"72613473-b58a-4d55-9fa2-2e751ee1312b","Type":"ContainerStarted","Data":"60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.030015 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.044073 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerStarted","Data":"361e731455777467df96c6f71224946022fac1249be246873e7ea4cc9ff11e66"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.044939 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.046752 4624 generic.go:334] "Generic (PLEG): container finished" podID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerID="e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91" exitCode=0 Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.046804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerDied","Data":"e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.050423 4624 generic.go:334] "Generic (PLEG): container finished" podID="9a269916-9894-4dcf-99db-7df5a1791898" containerID="54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba" exitCode=0 Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.050474 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" 
event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerDied","Data":"54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.054900 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" event={"ID":"6ea3687a-e119-425f-87cb-de1ef4ee14f1","Type":"ContainerStarted","Data":"54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.054928 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.075116 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5lxr" event={"ID":"39526829-389b-49e1-8a31-5fee6a4ffa8f","Type":"ContainerStarted","Data":"2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.081073 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.081936 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerStarted","Data":"cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2"} Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.107787 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kgmt8" podStartSLOduration=6.802084943 podStartE2EDuration="59.107768405s" podCreationTimestamp="2026-02-28 03:37:46 +0000 UTC" firstStartedPulling="2026-02-28 03:37:51.4988419 +0000 UTC m=+126.162881209" lastFinishedPulling="2026-02-28 03:38:43.804525362 +0000 UTC 
m=+178.468564671" observedRunningTime="2026-02-28 03:38:45.102552463 +0000 UTC m=+179.766591772" watchObservedRunningTime="2026-02-28 03:38:45.107768405 +0000 UTC m=+179.771807714" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.210076 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" podStartSLOduration=5.210049409 podStartE2EDuration="5.210049409s" podCreationTimestamp="2026-02-28 03:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:45.153664959 +0000 UTC m=+179.817704268" watchObservedRunningTime="2026-02-28 03:38:45.210049409 +0000 UTC m=+179.874088718" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.217677 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" podStartSLOduration=5.217651925 podStartE2EDuration="5.217651925s" podCreationTimestamp="2026-02-28 03:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:45.206174914 +0000 UTC m=+179.870214223" watchObservedRunningTime="2026-02-28 03:38:45.217651925 +0000 UTC m=+179.881691234" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.343285 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x54xc" podStartSLOduration=5.212044774 podStartE2EDuration="59.343265772s" podCreationTimestamp="2026-02-28 03:37:46 +0000 UTC" firstStartedPulling="2026-02-28 03:37:49.992629344 +0000 UTC m=+124.656668653" lastFinishedPulling="2026-02-28 03:38:44.123850342 +0000 UTC m=+178.787889651" observedRunningTime="2026-02-28 03:38:45.342179452 +0000 UTC m=+180.006218751" watchObservedRunningTime="2026-02-28 03:38:45.343265772 +0000 UTC 
m=+180.007305081" Feb 28 03:38:45 crc kubenswrapper[4624]: I0228 03:38:45.364343 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r5lxr" podStartSLOduration=5.555743324 podStartE2EDuration="59.364324713s" podCreationTimestamp="2026-02-28 03:37:46 +0000 UTC" firstStartedPulling="2026-02-28 03:37:50.204377926 +0000 UTC m=+124.868417235" lastFinishedPulling="2026-02-28 03:38:44.012959315 +0000 UTC m=+178.676998624" observedRunningTime="2026-02-28 03:38:45.36056661 +0000 UTC m=+180.024605919" watchObservedRunningTime="2026-02-28 03:38:45.364324713 +0000 UTC m=+180.028364022" Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.101497 4624 generic.go:334] "Generic (PLEG): container finished" podID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerID="f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef" exitCode=0 Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.143573 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8913a76-5e7d-4d49-a9a4-388c052cf594" path="/var/lib/kubelet/pods/e8913a76-5e7d-4d49-a9a4-388c052cf594/volumes" Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.144613 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.144713 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerDied","Data":"f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef"} Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.144845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" 
event={"ID":"ac55e312-5949-43b0-bc55-e445b4be3952","Type":"ContainerStarted","Data":"152950bc7ef6440591d6f100bdc113886310d0c24df22592df4020040848c385"} Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.168294 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" podStartSLOduration=30.168272745 podStartE2EDuration="30.168272745s" podCreationTimestamp="2026-02-28 03:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:38:46.165700505 +0000 UTC m=+180.829739824" watchObservedRunningTime="2026-02-28 03:38:46.168272745 +0000 UTC m=+180.832312054" Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.415878 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6994c46f75-gjtb6" Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.681031 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:38:46 crc kubenswrapper[4624]: I0228 03:38:46.681102 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.078743 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.081305 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.114837 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" 
event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerStarted","Data":"fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce"} Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.118196 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerStarted","Data":"8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c"} Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.120694 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerStarted","Data":"243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4"} Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.124180 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerStarted","Data":"f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79"} Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.142179 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jjcv7" podStartSLOduration=4.192835277 podStartE2EDuration="1m4.142154326s" podCreationTimestamp="2026-02-28 03:37:43 +0000 UTC" firstStartedPulling="2026-02-28 03:37:46.620962758 +0000 UTC m=+121.285002077" lastFinishedPulling="2026-02-28 03:38:46.570281817 +0000 UTC m=+181.234321126" observedRunningTime="2026-02-28 03:38:47.139567666 +0000 UTC m=+181.803606975" watchObservedRunningTime="2026-02-28 03:38:47.142154326 +0000 UTC m=+181.806193635" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.175263 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h895m" podStartSLOduration=4.876745182 
podStartE2EDuration="1m3.175242343s" podCreationTimestamp="2026-02-28 03:37:44 +0000 UTC" firstStartedPulling="2026-02-28 03:37:47.73947283 +0000 UTC m=+122.403512139" lastFinishedPulling="2026-02-28 03:38:46.037969991 +0000 UTC m=+180.702009300" observedRunningTime="2026-02-28 03:38:47.172948401 +0000 UTC m=+181.836987710" watchObservedRunningTime="2026-02-28 03:38:47.175242343 +0000 UTC m=+181.839281652" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.198699 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2pjq" podStartSLOduration=5.149360775 podStartE2EDuration="1m2.198674499s" podCreationTimestamp="2026-02-28 03:37:45 +0000 UTC" firstStartedPulling="2026-02-28 03:37:48.895635224 +0000 UTC m=+123.559674533" lastFinishedPulling="2026-02-28 03:38:45.944948948 +0000 UTC m=+180.608988257" observedRunningTime="2026-02-28 03:38:47.196069058 +0000 UTC m=+181.860108367" watchObservedRunningTime="2026-02-28 03:38:47.198674499 +0000 UTC m=+181.862713808" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.229051 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qw86g" podStartSLOduration=6.044511743 podStartE2EDuration="1m4.229029662s" podCreationTimestamp="2026-02-28 03:37:43 +0000 UTC" firstStartedPulling="2026-02-28 03:37:47.736670244 +0000 UTC m=+122.400709553" lastFinishedPulling="2026-02-28 03:38:45.921188163 +0000 UTC m=+180.585227472" observedRunningTime="2026-02-28 03:38:47.224334934 +0000 UTC m=+181.888374243" watchObservedRunningTime="2026-02-28 03:38:47.229029662 +0000 UTC m=+181.893068971" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.524608 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.525958 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:38:47 crc kubenswrapper[4624]: I0228 03:38:47.749025 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-r5lxr" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="registry-server" probeResult="failure" output=< Feb 28 03:38:47 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 03:38:47 crc kubenswrapper[4624]: > Feb 28 03:38:48 crc kubenswrapper[4624]: I0228 03:38:48.139436 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x54xc" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="registry-server" probeResult="failure" output=< Feb 28 03:38:48 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 03:38:48 crc kubenswrapper[4624]: > Feb 28 03:38:48 crc kubenswrapper[4624]: I0228 03:38:48.569776 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kgmt8" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="registry-server" probeResult="failure" output=< Feb 28 03:38:48 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 03:38:48 crc kubenswrapper[4624]: > Feb 28 03:38:53 crc kubenswrapper[4624]: I0228 03:38:53.977604 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:38:53 crc kubenswrapper[4624]: I0228 03:38:53.977701 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:38:54 crc kubenswrapper[4624]: I0228 03:38:54.060240 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:38:54 crc kubenswrapper[4624]: I0228 03:38:54.249010 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:38:54 crc kubenswrapper[4624]: I0228 03:38:54.682022 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:38:54 crc kubenswrapper[4624]: I0228 03:38:54.682141 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:38:54 crc kubenswrapper[4624]: I0228 03:38:54.752824 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:38:55 crc kubenswrapper[4624]: I0228 03:38:55.253436 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:38:55 crc kubenswrapper[4624]: I0228 03:38:55.338116 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:38:55 crc kubenswrapper[4624]: I0228 03:38:55.338809 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:38:55 crc kubenswrapper[4624]: I0228 03:38:55.411722 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:38:56 crc kubenswrapper[4624]: I0228 03:38:56.237178 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:38:56 crc kubenswrapper[4624]: I0228 03:38:56.332269 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qw86g"] Feb 28 03:38:56 crc kubenswrapper[4624]: I0228 03:38:56.395245 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:38:56 crc kubenswrapper[4624]: 
I0228 03:38:56.395342 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:38:56 crc kubenswrapper[4624]: I0228 03:38:56.455924 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:38:56 crc kubenswrapper[4624]: I0228 03:38:56.732267 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:38:56 crc kubenswrapper[4624]: I0228 03:38:56.786897 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:38:57 crc kubenswrapper[4624]: I0228 03:38:57.141573 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:38:57 crc kubenswrapper[4624]: I0228 03:38:57.182698 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:38:57 crc kubenswrapper[4624]: I0228 03:38:57.196628 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qw86g" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="registry-server" containerID="cri-o://8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c" gracePeriod=2 Feb 28 03:38:57 crc kubenswrapper[4624]: I0228 03:38:57.270877 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:38:57 crc kubenswrapper[4624]: I0228 03:38:57.564644 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:38:57 crc kubenswrapper[4624]: I0228 03:38:57.603632 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.128416 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.205892 4624 generic.go:334] "Generic (PLEG): container finished" podID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerID="8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c" exitCode=0 Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.206990 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qw86g" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.207487 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerDied","Data":"8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c"} Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.207526 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw86g" event={"ID":"1f8aeb46-02be-4b30-abdb-7c378da509ba","Type":"ContainerDied","Data":"e69e1bddf6304b71c3f10939217df7c56a3840365ae97d915be28eaa2369b970"} Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.207551 4624 scope.go:117] "RemoveContainer" containerID="8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.244052 4624 scope.go:117] "RemoveContainer" containerID="e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.245746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-utilities\") pod \"1f8aeb46-02be-4b30-abdb-7c378da509ba\" 
(UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.245808 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx54v\" (UniqueName: \"kubernetes.io/projected/1f8aeb46-02be-4b30-abdb-7c378da509ba-kube-api-access-rx54v\") pod \"1f8aeb46-02be-4b30-abdb-7c378da509ba\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.245915 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-catalog-content\") pod \"1f8aeb46-02be-4b30-abdb-7c378da509ba\" (UID: \"1f8aeb46-02be-4b30-abdb-7c378da509ba\") " Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.247205 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-utilities" (OuterVolumeSpecName: "utilities") pod "1f8aeb46-02be-4b30-abdb-7c378da509ba" (UID: "1f8aeb46-02be-4b30-abdb-7c378da509ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.268540 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8aeb46-02be-4b30-abdb-7c378da509ba-kube-api-access-rx54v" (OuterVolumeSpecName: "kube-api-access-rx54v") pod "1f8aeb46-02be-4b30-abdb-7c378da509ba" (UID: "1f8aeb46-02be-4b30-abdb-7c378da509ba"). InnerVolumeSpecName "kube-api-access-rx54v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.285287 4624 scope.go:117] "RemoveContainer" containerID="099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.330636 4624 scope.go:117] "RemoveContainer" containerID="8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c" Feb 28 03:38:58 crc kubenswrapper[4624]: E0228 03:38:58.331834 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c\": container with ID starting with 8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c not found: ID does not exist" containerID="8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.331916 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c"} err="failed to get container status \"8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c\": rpc error: code = NotFound desc = could not find container \"8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c\": container with ID starting with 8853575006fc9c2a4d508a0a1123ba285bd3c14e97b4402b62463daa83e8fb6c not found: ID does not exist" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.331949 4624 scope.go:117] "RemoveContainer" containerID="e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.335858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f8aeb46-02be-4b30-abdb-7c378da509ba" (UID: 
"1f8aeb46-02be-4b30-abdb-7c378da509ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:38:58 crc kubenswrapper[4624]: E0228 03:38:58.338039 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91\": container with ID starting with e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91 not found: ID does not exist" containerID="e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.338130 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91"} err="failed to get container status \"e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91\": rpc error: code = NotFound desc = could not find container \"e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91\": container with ID starting with e33d41478c1f46c2c487c8dbfcd8ee31bf403a253afbe2c41fba8586cc422e91 not found: ID does not exist" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.338161 4624 scope.go:117] "RemoveContainer" containerID="099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41" Feb 28 03:38:58 crc kubenswrapper[4624]: E0228 03:38:58.338746 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41\": container with ID starting with 099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41 not found: ID does not exist" containerID="099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.338780 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41"} err="failed to get container status \"099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41\": rpc error: code = NotFound desc = could not find container \"099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41\": container with ID starting with 099ed24d5139e023ecb98996c0239f5aeef92660d4b69257989be0a760f65e41 not found: ID does not exist" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.347857 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.347885 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx54v\" (UniqueName: \"kubernetes.io/projected/1f8aeb46-02be-4b30-abdb-7c378da509ba-kube-api-access-rx54v\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.347896 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f8aeb46-02be-4b30-abdb-7c378da509ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.530341 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h895m"] Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.545757 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qw86g"] Feb 28 03:38:58 crc kubenswrapper[4624]: I0228 03:38:58.553168 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qw86g"] Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.214100 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h895m" 
podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="registry-server" containerID="cri-o://f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79" gracePeriod=2 Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.777592 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.974292 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-catalog-content\") pod \"9a269916-9894-4dcf-99db-7df5a1791898\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.974432 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znckv\" (UniqueName: \"kubernetes.io/projected/9a269916-9894-4dcf-99db-7df5a1791898-kube-api-access-znckv\") pod \"9a269916-9894-4dcf-99db-7df5a1791898\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.974511 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-utilities\") pod \"9a269916-9894-4dcf-99db-7df5a1791898\" (UID: \"9a269916-9894-4dcf-99db-7df5a1791898\") " Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.975886 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-utilities" (OuterVolumeSpecName: "utilities") pod "9a269916-9894-4dcf-99db-7df5a1791898" (UID: "9a269916-9894-4dcf-99db-7df5a1791898"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:38:59 crc kubenswrapper[4624]: I0228 03:38:59.986447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a269916-9894-4dcf-99db-7df5a1791898-kube-api-access-znckv" (OuterVolumeSpecName: "kube-api-access-znckv") pod "9a269916-9894-4dcf-99db-7df5a1791898" (UID: "9a269916-9894-4dcf-99db-7df5a1791898"). InnerVolumeSpecName "kube-api-access-znckv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.031724 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a269916-9894-4dcf-99db-7df5a1791898" (UID: "9a269916-9894-4dcf-99db-7df5a1791898"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.077294 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.077361 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znckv\" (UniqueName: \"kubernetes.io/projected/9a269916-9894-4dcf-99db-7df5a1791898-kube-api-access-znckv\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.077390 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a269916-9894-4dcf-99db-7df5a1791898-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.100826 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" 
path="/var/lib/kubelet/pods/1f8aeb46-02be-4b30-abdb-7c378da509ba/volumes" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.224909 4624 generic.go:334] "Generic (PLEG): container finished" podID="9a269916-9894-4dcf-99db-7df5a1791898" containerID="f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79" exitCode=0 Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.224961 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerDied","Data":"f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79"} Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.224994 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h895m" event={"ID":"9a269916-9894-4dcf-99db-7df5a1791898","Type":"ContainerDied","Data":"df6f3a08522e29fa0425b661b836b79ee8486faabc9ee7a67c1c49baea19a9b5"} Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.225015 4624 scope.go:117] "RemoveContainer" containerID="f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.226395 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h895m" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.269544 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h895m"] Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.272789 4624 scope.go:117] "RemoveContainer" containerID="54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.279129 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fdcb99c-44lth"] Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.279736 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" podUID="6ea3687a-e119-425f-87cb-de1ef4ee14f1" containerName="controller-manager" containerID="cri-o://54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3" gracePeriod=30 Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.285052 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h895m"] Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.323203 4624 scope.go:117] "RemoveContainer" containerID="f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.354176 4624 scope.go:117] "RemoveContainer" containerID="f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79" Feb 28 03:39:00 crc kubenswrapper[4624]: E0228 03:39:00.354702 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79\": container with ID starting with f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79 not found: ID does not exist" 
containerID="f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.354831 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79"} err="failed to get container status \"f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79\": rpc error: code = NotFound desc = could not find container \"f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79\": container with ID starting with f519376317ddb2f53f993c92cc64bd3f356cc00fb99d626207dba15aa3f66c79 not found: ID does not exist" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.354868 4624 scope.go:117] "RemoveContainer" containerID="54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba" Feb 28 03:39:00 crc kubenswrapper[4624]: E0228 03:39:00.355364 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba\": container with ID starting with 54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba not found: ID does not exist" containerID="54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.355389 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba"} err="failed to get container status \"54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba\": rpc error: code = NotFound desc = could not find container \"54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba\": container with ID starting with 54cb656f3808a40202d212d317e77e2a398a7527403d321de89c4ed8ca4772ba not found: ID does not exist" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.355405 4624 scope.go:117] 
"RemoveContainer" containerID="f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881" Feb 28 03:39:00 crc kubenswrapper[4624]: E0228 03:39:00.355636 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881\": container with ID starting with f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881 not found: ID does not exist" containerID="f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.355660 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881"} err="failed to get container status \"f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881\": rpc error: code = NotFound desc = could not find container \"f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881\": container with ID starting with f7294c681218519517e31fddf842626e805d5596da8eb8fd888d59bd65ffb881 not found: ID does not exist" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.365943 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5"] Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.366217 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" podUID="72613473-b58a-4d55-9fa2-2e751ee1312b" containerName="route-controller-manager" containerID="cri-o://60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a" gracePeriod=30 Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.728913 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5lxr"] Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 
03:39:00.729532 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r5lxr" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="registry-server" containerID="cri-o://2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b" gracePeriod=2 Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.872939 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.909928 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.928751 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kgmt8"] Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.929050 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kgmt8" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="registry-server" containerID="cri-o://361e731455777467df96c6f71224946022fac1249be246873e7ea4cc9ff11e66" gracePeriod=2 Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.998380 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72613473-b58a-4d55-9fa2-2e751ee1312b-serving-cert\") pod \"72613473-b58a-4d55-9fa2-2e751ee1312b\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.998457 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-config\") pod \"72613473-b58a-4d55-9fa2-2e751ee1312b\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " Feb 28 
03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.998535 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv5pt\" (UniqueName: \"kubernetes.io/projected/72613473-b58a-4d55-9fa2-2e751ee1312b-kube-api-access-xv5pt\") pod \"72613473-b58a-4d55-9fa2-2e751ee1312b\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.998834 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-client-ca\") pod \"72613473-b58a-4d55-9fa2-2e751ee1312b\" (UID: \"72613473-b58a-4d55-9fa2-2e751ee1312b\") " Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.998868 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-config\") pod \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " Feb 28 03:39:00 crc kubenswrapper[4624]: I0228 03:39:00.998900 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-proxy-ca-bundles\") pod \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:00.999931 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-config" (OuterVolumeSpecName: "config") pod "72613473-b58a-4d55-9fa2-2e751ee1312b" (UID: "72613473-b58a-4d55-9fa2-2e751ee1312b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.000285 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-client-ca" (OuterVolumeSpecName: "client-ca") pod "72613473-b58a-4d55-9fa2-2e751ee1312b" (UID: "72613473-b58a-4d55-9fa2-2e751ee1312b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.000824 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-config" (OuterVolumeSpecName: "config") pod "6ea3687a-e119-425f-87cb-de1ef4ee14f1" (UID: "6ea3687a-e119-425f-87cb-de1ef4ee14f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.001295 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6ea3687a-e119-425f-87cb-de1ef4ee14f1" (UID: "6ea3687a-e119-425f-87cb-de1ef4ee14f1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.005409 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72613473-b58a-4d55-9fa2-2e751ee1312b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72613473-b58a-4d55-9fa2-2e751ee1312b" (UID: "72613473-b58a-4d55-9fa2-2e751ee1312b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.007338 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72613473-b58a-4d55-9fa2-2e751ee1312b-kube-api-access-xv5pt" (OuterVolumeSpecName: "kube-api-access-xv5pt") pod "72613473-b58a-4d55-9fa2-2e751ee1312b" (UID: "72613473-b58a-4d55-9fa2-2e751ee1312b"). InnerVolumeSpecName "kube-api-access-xv5pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.099943 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-client-ca\") pod \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100012 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hclx\" (UniqueName: \"kubernetes.io/projected/6ea3687a-e119-425f-87cb-de1ef4ee14f1-kube-api-access-5hclx\") pod \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100063 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea3687a-e119-425f-87cb-de1ef4ee14f1-serving-cert\") pod \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\" (UID: \"6ea3687a-e119-425f-87cb-de1ef4ee14f1\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100383 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100407 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100417 4624 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100430 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72613473-b58a-4d55-9fa2-2e751ee1312b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100440 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72613473-b58a-4d55-9fa2-2e751ee1312b-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100450 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv5pt\" (UniqueName: \"kubernetes.io/projected/72613473-b58a-4d55-9fa2-2e751ee1312b-kube-api-access-xv5pt\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.100577 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ea3687a-e119-425f-87cb-de1ef4ee14f1" (UID: "6ea3687a-e119-425f-87cb-de1ef4ee14f1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.103456 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea3687a-e119-425f-87cb-de1ef4ee14f1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ea3687a-e119-425f-87cb-de1ef4ee14f1" (UID: "6ea3687a-e119-425f-87cb-de1ef4ee14f1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.103464 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea3687a-e119-425f-87cb-de1ef4ee14f1-kube-api-access-5hclx" (OuterVolumeSpecName: "kube-api-access-5hclx") pod "6ea3687a-e119-425f-87cb-de1ef4ee14f1" (UID: "6ea3687a-e119-425f-87cb-de1ef4ee14f1"). InnerVolumeSpecName "kube-api-access-5hclx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.158187 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.201890 4624 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea3687a-e119-425f-87cb-de1ef4ee14f1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.201929 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hclx\" (UniqueName: \"kubernetes.io/projected/6ea3687a-e119-425f-87cb-de1ef4ee14f1-kube-api-access-5hclx\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.201945 4624 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea3687a-e119-425f-87cb-de1ef4ee14f1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.233198 4624 generic.go:334] "Generic (PLEG): container finished" podID="72613473-b58a-4d55-9fa2-2e751ee1312b" containerID="60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a" exitCode=0 Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.233285 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" 
event={"ID":"72613473-b58a-4d55-9fa2-2e751ee1312b","Type":"ContainerDied","Data":"60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.233323 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" event={"ID":"72613473-b58a-4d55-9fa2-2e751ee1312b","Type":"ContainerDied","Data":"71291d97d37e72001ee643780cb388f46bd3e930fab96f14a69df6c9e0fefcbb"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.233346 4624 scope.go:117] "RemoveContainer" containerID="60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.233492 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.240935 4624 generic.go:334] "Generic (PLEG): container finished" podID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerID="361e731455777467df96c6f71224946022fac1249be246873e7ea4cc9ff11e66" exitCode=0 Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.241014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerDied","Data":"361e731455777467df96c6f71224946022fac1249be246873e7ea4cc9ff11e66"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.243762 4624 generic.go:334] "Generic (PLEG): container finished" podID="6ea3687a-e119-425f-87cb-de1ef4ee14f1" containerID="54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3" exitCode=0 Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.243850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" 
event={"ID":"6ea3687a-e119-425f-87cb-de1ef4ee14f1","Type":"ContainerDied","Data":"54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.243903 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.243907 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fdcb99c-44lth" event={"ID":"6ea3687a-e119-425f-87cb-de1ef4ee14f1","Type":"ContainerDied","Data":"a96de88f699bade54332df50b3b18b4996d8990ea3897259ff2ce7940e80a36e"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.247153 4624 generic.go:334] "Generic (PLEG): container finished" podID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerID="2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b" exitCode=0 Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.247214 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5lxr" event={"ID":"39526829-389b-49e1-8a31-5fee6a4ffa8f","Type":"ContainerDied","Data":"2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.247250 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r5lxr" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.247255 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r5lxr" event={"ID":"39526829-389b-49e1-8a31-5fee6a4ffa8f","Type":"ContainerDied","Data":"e9164fdeb16f2fd25518c698d935bdb57e7a251230ff4c7c94c47c2bb2f9f438"} Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.255840 4624 scope.go:117] "RemoveContainer" containerID="60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.256393 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a\": container with ID starting with 60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a not found: ID does not exist" containerID="60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.256446 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a"} err="failed to get container status \"60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a\": rpc error: code = NotFound desc = could not find container \"60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a\": container with ID starting with 60edf750fb9cca988a5f3224854f76882905c7a99a1779a8e354a66d9f91ff1a not found: ID does not exist" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.256563 4624 scope.go:117] "RemoveContainer" containerID="54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.281309 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.285035 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cfd9b9c67-4qwp5"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.295012 4624 scope.go:117] "RemoveContainer" containerID="54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.295677 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3\": container with ID starting with 54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3 not found: ID does not exist" containerID="54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.295723 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3"} err="failed to get container status \"54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3\": rpc error: code = NotFound desc = could not find container \"54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3\": container with ID starting with 54404c25179f640a7b0b46d82d44f9defa594ccf6263e05c925d4a0f425665d3 not found: ID does not exist" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.295757 4624 scope.go:117] "RemoveContainer" containerID="2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.297803 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fdcb99c-44lth"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.303393 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-catalog-content\") pod \"39526829-389b-49e1-8a31-5fee6a4ffa8f\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.303481 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-utilities\") pod \"39526829-389b-49e1-8a31-5fee6a4ffa8f\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.303575 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxdpz\" (UniqueName: \"kubernetes.io/projected/39526829-389b-49e1-8a31-5fee6a4ffa8f-kube-api-access-pxdpz\") pod \"39526829-389b-49e1-8a31-5fee6a4ffa8f\" (UID: \"39526829-389b-49e1-8a31-5fee6a4ffa8f\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.305392 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-utilities" (OuterVolumeSpecName: "utilities") pod "39526829-389b-49e1-8a31-5fee6a4ffa8f" (UID: "39526829-389b-49e1-8a31-5fee6a4ffa8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.305448 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67fdcb99c-44lth"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.308002 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.308670 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39526829-389b-49e1-8a31-5fee6a4ffa8f-kube-api-access-pxdpz" (OuterVolumeSpecName: "kube-api-access-pxdpz") pod "39526829-389b-49e1-8a31-5fee6a4ffa8f" (UID: "39526829-389b-49e1-8a31-5fee6a4ffa8f"). InnerVolumeSpecName "kube-api-access-pxdpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.314406 4624 scope.go:117] "RemoveContainer" containerID="733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.333584 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39526829-389b-49e1-8a31-5fee6a4ffa8f" (UID: "39526829-389b-49e1-8a31-5fee6a4ffa8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.342936 4624 scope.go:117] "RemoveContainer" containerID="12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.384300 4624 scope.go:117] "RemoveContainer" containerID="2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.385156 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b\": container with ID starting with 2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b not found: ID does not exist" containerID="2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.385224 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b"} err="failed to get container status \"2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b\": rpc error: code = NotFound desc = could not find container \"2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b\": container with ID starting with 2d5ba466827491042c8be38528598cf353a0e5b425682ee7988b95a0aeeadc8b not found: ID does not exist" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.385295 4624 scope.go:117] "RemoveContainer" containerID="733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.385744 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706\": container with ID starting with 
733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706 not found: ID does not exist" containerID="733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.385799 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706"} err="failed to get container status \"733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706\": rpc error: code = NotFound desc = could not find container \"733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706\": container with ID starting with 733be7afae81fff2ff37a87800735cd94d29f793517c73229907b39bfbae6706 not found: ID does not exist" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.385840 4624 scope.go:117] "RemoveContainer" containerID="12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.386217 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b\": container with ID starting with 12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b not found: ID does not exist" containerID="12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.386257 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b"} err="failed to get container status \"12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b\": rpc error: code = NotFound desc = could not find container \"12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b\": container with ID starting with 12ef66c041f3507993e497e095e7fb1acd85c66eb19d56ceef8834c49dc37c2b not found: ID does not 
exist" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.405942 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-utilities\") pod \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.406023 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48dcb\" (UniqueName: \"kubernetes.io/projected/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-kube-api-access-48dcb\") pod \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.406062 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-catalog-content\") pod \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\" (UID: \"02145e1a-bf6e-41a9-ac4c-a8fa7b186414\") " Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.406282 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.406300 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxdpz\" (UniqueName: \"kubernetes.io/projected/39526829-389b-49e1-8a31-5fee6a4ffa8f-kube-api-access-pxdpz\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.406311 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39526829-389b-49e1-8a31-5fee6a4ffa8f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.407258 4624 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-utilities" (OuterVolumeSpecName: "utilities") pod "02145e1a-bf6e-41a9-ac4c-a8fa7b186414" (UID: "02145e1a-bf6e-41a9-ac4c-a8fa7b186414"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.410505 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-kube-api-access-48dcb" (OuterVolumeSpecName: "kube-api-access-48dcb") pod "02145e1a-bf6e-41a9-ac4c-a8fa7b186414" (UID: "02145e1a-bf6e-41a9-ac4c-a8fa7b186414"). InnerVolumeSpecName "kube-api-access-48dcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.507617 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.507691 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48dcb\" (UniqueName: \"kubernetes.io/projected/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-kube-api-access-48dcb\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.582133 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02145e1a-bf6e-41a9-ac4c-a8fa7b186414" (UID: "02145e1a-bf6e-41a9-ac4c-a8fa7b186414"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.587795 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5lxr"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.592744 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r5lxr"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.609317 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02145e1a-bf6e-41a9-ac4c-a8fa7b186414-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861024 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f97d67975-r9rjn"] Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861710 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861726 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861737 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861744 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861759 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861766 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861778 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72613473-b58a-4d55-9fa2-2e751ee1312b" containerName="route-controller-manager" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861786 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="72613473-b58a-4d55-9fa2-2e751ee1312b" containerName="route-controller-manager" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861799 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861806 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861815 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861821 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861831 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861839 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861848 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861857 4624 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="extract-utilities" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861865 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861872 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861881 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861889 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861898 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861904 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861912 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861918 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="extract-content" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861926 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861932 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: E0228 03:39:01.861946 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea3687a-e119-425f-87cb-de1ef4ee14f1" containerName="controller-manager" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.861953 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea3687a-e119-425f-87cb-de1ef4ee14f1" containerName="controller-manager" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862070 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="72613473-b58a-4d55-9fa2-2e751ee1312b" containerName="route-controller-manager" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862097 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862105 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a269916-9894-4dcf-99db-7df5a1791898" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862116 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ea3687a-e119-425f-87cb-de1ef4ee14f1" containerName="controller-manager" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862127 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8aeb46-02be-4b30-abdb-7c378da509ba" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862134 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" containerName="registry-server" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.862646 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.868770 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.869444 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.869745 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.869823 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.870058 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.875079 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.879479 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.882454 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.884664 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.884537 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.887338 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f97d67975-r9rjn"] Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.887700 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.890332 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.890521 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.890950 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.890963 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 03:39:01 crc kubenswrapper[4624]: I0228 03:39:01.891636 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026166 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-config\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " 
pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026254 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15733fef-9371-4d87-919d-54841fb4719c-config\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026343 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8cb\" (UniqueName: \"kubernetes.io/projected/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-kube-api-access-qr8cb\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026430 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15733fef-9371-4d87-919d-54841fb4719c-serving-cert\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-serving-cert\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026519 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-xtkkf\" (UniqueName: \"kubernetes.io/projected/15733fef-9371-4d87-919d-54841fb4719c-kube-api-access-xtkkf\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-proxy-ca-bundles\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026577 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-client-ca\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.026601 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15733fef-9371-4d87-919d-54841fb4719c-client-ca\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.099730 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39526829-389b-49e1-8a31-5fee6a4ffa8f" path="/var/lib/kubelet/pods/39526829-389b-49e1-8a31-5fee6a4ffa8f/volumes" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.101494 4624 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6ea3687a-e119-425f-87cb-de1ef4ee14f1" path="/var/lib/kubelet/pods/6ea3687a-e119-425f-87cb-de1ef4ee14f1/volumes" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.102671 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72613473-b58a-4d55-9fa2-2e751ee1312b" path="/var/lib/kubelet/pods/72613473-b58a-4d55-9fa2-2e751ee1312b/volumes" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.104672 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a269916-9894-4dcf-99db-7df5a1791898" path="/var/lib/kubelet/pods/9a269916-9894-4dcf-99db-7df5a1791898/volumes" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.128774 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkkf\" (UniqueName: \"kubernetes.io/projected/15733fef-9371-4d87-919d-54841fb4719c-kube-api-access-xtkkf\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.128844 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-proxy-ca-bundles\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.128884 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15733fef-9371-4d87-919d-54841fb4719c-client-ca\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 
03:39:02.128907 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-client-ca\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.128929 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-config\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.128958 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15733fef-9371-4d87-919d-54841fb4719c-config\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.128993 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8cb\" (UniqueName: \"kubernetes.io/projected/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-kube-api-access-qr8cb\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.129039 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-serving-cert\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " 
pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.129062 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15733fef-9371-4d87-919d-54841fb4719c-serving-cert\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.131418 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-client-ca\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.132952 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15733fef-9371-4d87-919d-54841fb4719c-client-ca\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.133777 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-proxy-ca-bundles\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.134110 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15733fef-9371-4d87-919d-54841fb4719c-config\") pod 
\"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.134736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-config\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.150030 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-serving-cert\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.154311 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15733fef-9371-4d87-919d-54841fb4719c-serving-cert\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.157652 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkkf\" (UniqueName: \"kubernetes.io/projected/15733fef-9371-4d87-919d-54841fb4719c-kube-api-access-xtkkf\") pod \"route-controller-manager-d88d9fbf-2w78j\" (UID: \"15733fef-9371-4d87-919d-54841fb4719c\") " pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.162558 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qr8cb\" (UniqueName: \"kubernetes.io/projected/a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d-kube-api-access-qr8cb\") pod \"controller-manager-6f97d67975-r9rjn\" (UID: \"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d\") " pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.225037 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.237909 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.271796 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgmt8" event={"ID":"02145e1a-bf6e-41a9-ac4c-a8fa7b186414","Type":"ContainerDied","Data":"2d85e4d49297582c6b5c25826fa51a56a6066ea1691f27e4d9e2b4c4bbce5d92"} Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.271859 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgmt8" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.271904 4624 scope.go:117] "RemoveContainer" containerID="361e731455777467df96c6f71224946022fac1249be246873e7ea4cc9ff11e66" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.301395 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kgmt8"] Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.306906 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kgmt8"] Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.308535 4624 scope.go:117] "RemoveContainer" containerID="6fc164f32767a348e385160ee2f32cdb74fc0978ca2a0297033e69a04da45eb2" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.349433 4624 scope.go:117] "RemoveContainer" containerID="b602e5bf4ee47eeb456fe3ad3af7973eb8735d9188388235ecc79a2194e68a91" Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.495477 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j"] Feb 28 03:39:02 crc kubenswrapper[4624]: I0228 03:39:02.610666 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f97d67975-r9rjn"] Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.291255 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" event={"ID":"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d","Type":"ContainerStarted","Data":"5b8df309fb951b6d2a731d99f3d59fb70598f5b69404301fcb43fa5228d83992"} Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.291312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" 
event={"ID":"a81e2ce7-e930-4adb-8dc1-b33e8f1fce5d","Type":"ContainerStarted","Data":"a517fe816c73a00dd17a3a641fb39be592ed179c817c72e39a0ad23b58e38f15"} Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.292409 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.294544 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" event={"ID":"15733fef-9371-4d87-919d-54841fb4719c","Type":"ContainerStarted","Data":"0b7b471ef58c9f853d7aef20948086b9bfe0f7954201bc75cb5be5a56933ff76"} Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.294573 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" event={"ID":"15733fef-9371-4d87-919d-54841fb4719c","Type":"ContainerStarted","Data":"d0ab70e76131d745fddffbe0c60fca4ddd1479347495ddc2ce65a2f4921df7e3"} Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.294871 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.298667 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.304110 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" Feb 28 03:39:03 crc kubenswrapper[4624]: I0228 03:39:03.310178 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f97d67975-r9rjn" podStartSLOduration=3.310159555 podStartE2EDuration="3.310159555s" 
podCreationTimestamp="2026-02-28 03:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:39:03.309141137 +0000 UTC m=+197.973180446" watchObservedRunningTime="2026-02-28 03:39:03.310159555 +0000 UTC m=+197.974198864" Feb 28 03:39:04 crc kubenswrapper[4624]: I0228 03:39:04.093207 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02145e1a-bf6e-41a9-ac4c-a8fa7b186414" path="/var/lib/kubelet/pods/02145e1a-bf6e-41a9-ac4c-a8fa7b186414/volumes" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.608815 4624 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.610144 4624 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.610333 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.610496 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f" gracePeriod=15 Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.610526 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446" gracePeriod=15 Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.610545 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca" gracePeriod=15 Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.610586 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1" gracePeriod=15 Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611146 4624 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611433 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 
crc kubenswrapper[4624]: I0228 03:39:07.611451 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611464 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611473 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611481 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611487 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611498 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611504 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611514 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611522 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611537 4624 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611544 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611552 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611558 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611566 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611572 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611679 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611691 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611700 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611709 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611716 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611724 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611731 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611830 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611838 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: E0228 03:39:07.611847 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611863 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611945 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.611957 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.612311 4624 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89" gracePeriod=15 Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.617537 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.617885 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.618054 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.651780 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" podStartSLOduration=7.651757809 podStartE2EDuration="7.651757809s" podCreationTimestamp="2026-02-28 03:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:39:03.371213956 +0000 UTC m=+198.035253265" watchObservedRunningTime="2026-02-28 03:39:07.651757809 +0000 UTC 
m=+202.315797108" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.654278 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.722885 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.722956 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.722993 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723018 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723128 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723154 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723222 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723268 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.723297 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824156 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824226 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824257 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824289 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824331 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824365 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824396 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824367 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.824802 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: I0228 03:39:07.956554 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:39:07 crc kubenswrapper[4624]: W0228 03:39:07.998732 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-50f84bcd946eca56a3b52f8e3111fc882dcb7de4947266d30903d65485490b32 WatchSource:0}: Error finding container 50f84bcd946eca56a3b52f8e3111fc882dcb7de4947266d30903d65485490b32: Status 404 returned error can't find the container with id 50f84bcd946eca56a3b52f8e3111fc882dcb7de4947266d30903d65485490b32 Feb 28 03:39:08 crc kubenswrapper[4624]: E0228 03:39:08.002699 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18984bf01bdb8d0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:39:08.001897743 +0000 UTC m=+202.665937052,LastTimestamp:2026-02-28 03:39:08.001897743 +0000 UTC m=+202.665937052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.327858 4624 generic.go:334] "Generic (PLEG): container finished" podID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" containerID="78ef91b035bcb58d965a489bdfa722f7d9013604ba2e139f75da3d572762dfb2" exitCode=0 Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.328119 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6fbb0bd0-131a-412e-abb0-040e4e5ebf10","Type":"ContainerDied","Data":"78ef91b035bcb58d965a489bdfa722f7d9013604ba2e139f75da3d572762dfb2"} Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.329309 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.329781 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.331052 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.332544 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.333165 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446" exitCode=0 Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.333262 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89" exitCode=0 Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.333341 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca" exitCode=0 Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.333414 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1" exitCode=2 Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.333288 4624 scope.go:117] "RemoveContainer" containerID="3f15f205351151000e2a56a9b53ab7cda164d14c601b8f2002d68bfd39781e3a" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.334819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5763e2fff08f4ec232b267f8835484715bce567b3438cd066f334361aeedc6c6"} Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.334868 4624 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"50f84bcd946eca56a3b52f8e3111fc882dcb7de4947266d30903d65485490b32"} Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.335492 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:08 crc kubenswrapper[4624]: I0228 03:39:08.335945 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.347949 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 03:39:09 crc kubenswrapper[4624]: E0228 03:39:09.400025 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18984bf01bdb8d0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:39:08.001897743 +0000 UTC m=+202.665937052,LastTimestamp:2026-02-28 03:39:08.001897743 +0000 UTC m=+202.665937052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.744020 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.744551 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.744929 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.854114 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kube-api-access\") pod \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.854168 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kubelet-dir\") pod \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.854249 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-var-lock\") pod \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\" (UID: \"6fbb0bd0-131a-412e-abb0-040e4e5ebf10\") " Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.854404 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-var-lock" (OuterVolumeSpecName: "var-lock") pod "6fbb0bd0-131a-412e-abb0-040e4e5ebf10" (UID: "6fbb0bd0-131a-412e-abb0-040e4e5ebf10"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.854412 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6fbb0bd0-131a-412e-abb0-040e4e5ebf10" (UID: "6fbb0bd0-131a-412e-abb0-040e4e5ebf10"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.902061 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6fbb0bd0-131a-412e-abb0-040e4e5ebf10" (UID: "6fbb0bd0-131a-412e-abb0-040e4e5ebf10"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.955355 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.955396 4624 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:09 crc kubenswrapper[4624]: I0228 03:39:09.955405 4624 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6fbb0bd0-131a-412e-abb0-040e4e5ebf10-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.089671 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.091278 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.091948 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.092548 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.093190 4624 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.259709 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.259842 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.260039 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.260139 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.260165 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.260240 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.262073 4624 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.262121 4624 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.262135 4624 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.359289 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.361433 4624 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f" exitCode=0 Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.361533 4624 scope.go:117] "RemoveContainer" containerID="6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.361723 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.363558 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.364301 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.365077 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6fbb0bd0-131a-412e-abb0-040e4e5ebf10","Type":"ContainerDied","Data":"0e1d87e4cbd5567396c2fc470c9030cdd3b24b0aab5764923dec3c41ca07939b"} Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.365254 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1d87e4cbd5567396c2fc470c9030cdd3b24b0aab5764923dec3c41ca07939b" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.365320 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.365297 4624 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.369605 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.369931 4624 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.370270 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.377555 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.378074 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.378630 4624 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.384467 4624 scope.go:117] "RemoveContainer" containerID="c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.400219 4624 scope.go:117] "RemoveContainer" containerID="7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.414907 4624 scope.go:117] "RemoveContainer" containerID="e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.436964 4624 scope.go:117] "RemoveContainer" containerID="1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.461116 4624 scope.go:117] "RemoveContainer" containerID="941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.497186 4624 scope.go:117] "RemoveContainer" 
containerID="6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446" Feb 28 03:39:10 crc kubenswrapper[4624]: E0228 03:39:10.497995 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446\": container with ID starting with 6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446 not found: ID does not exist" containerID="6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.498124 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446"} err="failed to get container status \"6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446\": rpc error: code = NotFound desc = could not find container \"6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446\": container with ID starting with 6f3aaf4be60206454783418e14bdad827a9885682e70280e67e68eae9ad2b446 not found: ID does not exist" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.498261 4624 scope.go:117] "RemoveContainer" containerID="c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89" Feb 28 03:39:10 crc kubenswrapper[4624]: E0228 03:39:10.498952 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\": container with ID starting with c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89 not found: ID does not exist" containerID="c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.499002 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89"} err="failed to get container status \"c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\": rpc error: code = NotFound desc = could not find container \"c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89\": container with ID starting with c0b631873e2bd6beafdf20ef4752ceac4efdb0a2454513bfe231907f3e1ffe89 not found: ID does not exist" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.499035 4624 scope.go:117] "RemoveContainer" containerID="7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca" Feb 28 03:39:10 crc kubenswrapper[4624]: E0228 03:39:10.499809 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\": container with ID starting with 7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca not found: ID does not exist" containerID="7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.499909 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca"} err="failed to get container status \"7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\": rpc error: code = NotFound desc = could not find container \"7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca\": container with ID starting with 7bf5fa8830b8870f307239247d11a3a37e5278406ae789479f5dfdb26bae6fca not found: ID does not exist" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.499996 4624 scope.go:117] "RemoveContainer" containerID="e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1" Feb 28 03:39:10 crc kubenswrapper[4624]: E0228 03:39:10.500740 4624 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\": container with ID starting with e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1 not found: ID does not exist" containerID="e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.500815 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1"} err="failed to get container status \"e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\": rpc error: code = NotFound desc = could not find container \"e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1\": container with ID starting with e20c61433befd5a896839974446c473fe6d630281ed6cf9f7c9becb2ab07cfd1 not found: ID does not exist" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.500869 4624 scope.go:117] "RemoveContainer" containerID="1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f" Feb 28 03:39:10 crc kubenswrapper[4624]: E0228 03:39:10.501396 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\": container with ID starting with 1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f not found: ID does not exist" containerID="1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.501511 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f"} err="failed to get container status \"1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\": rpc error: code = NotFound desc = could not find container 
\"1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f\": container with ID starting with 1262b478a034732cd5d52a39fe5b19c1780e6051be8a1ff0ce287812b0f3812f not found: ID does not exist" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.501601 4624 scope.go:117] "RemoveContainer" containerID="941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601" Feb 28 03:39:10 crc kubenswrapper[4624]: E0228 03:39:10.502200 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\": container with ID starting with 941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601 not found: ID does not exist" containerID="941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601" Feb 28 03:39:10 crc kubenswrapper[4624]: I0228 03:39:10.502328 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601"} err="failed to get container status \"941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\": rpc error: code = NotFound desc = could not find container \"941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601\": container with ID starting with 941579f0bf8864fccc3167230c0b297c651fbe439b9a353d4fee4fe2c1f4d601 not found: ID does not exist" Feb 28 03:39:12 crc kubenswrapper[4624]: I0228 03:39:12.095034 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 28 03:39:15 crc kubenswrapper[4624]: E0228 03:39:15.551616 4624 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:15 crc 
kubenswrapper[4624]: E0228 03:39:15.552906 4624 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:15 crc kubenswrapper[4624]: E0228 03:39:15.553408 4624 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:15 crc kubenswrapper[4624]: E0228 03:39:15.553776 4624 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:15 crc kubenswrapper[4624]: E0228 03:39:15.554277 4624 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:15 crc kubenswrapper[4624]: I0228 03:39:15.554335 4624 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 28 03:39:15 crc kubenswrapper[4624]: E0228 03:39:15.554801 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms" Feb 28 03:39:15 crc kubenswrapper[4624]: E0228 03:39:15.756415 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection 
refused" interval="400ms" Feb 28 03:39:16 crc kubenswrapper[4624]: I0228 03:39:16.092875 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:16 crc kubenswrapper[4624]: I0228 03:39:16.094384 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:16 crc kubenswrapper[4624]: E0228 03:39:16.157880 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Feb 28 03:39:16 crc kubenswrapper[4624]: E0228 03:39:16.958743 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Feb 28 03:39:18 crc kubenswrapper[4624]: E0228 03:39:18.559950 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Feb 28 03:39:19 crc kubenswrapper[4624]: E0228 03:39:19.401715 4624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18984bf01bdb8d0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 03:39:08.001897743 +0000 UTC m=+202.665937052,LastTimestamp:2026-02-28 03:39:08.001897743 +0000 UTC m=+202.665937052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 03:39:19 crc kubenswrapper[4624]: I0228 03:39:19.540714 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:39:19 crc kubenswrapper[4624]: I0228 03:39:19.540823 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:39:20 crc kubenswrapper[4624]: E0228 03:39:20.121246 4624 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC 
openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" volumeName="registry-storage" Feb 28 03:39:21 crc kubenswrapper[4624]: E0228 03:39:21.761496 4624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="6.4s" Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.461417 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.463025 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.463149 4624 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ff98f82a375ad434c6c79d25921178c941462da96f5d64d615b63202ebde82fc" exitCode=1 Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.463207 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ff98f82a375ad434c6c79d25921178c941462da96f5d64d615b63202ebde82fc"} Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.464046 4624 scope.go:117] "RemoveContainer" containerID="ff98f82a375ad434c6c79d25921178c941462da96f5d64d615b63202ebde82fc" Feb 28 03:39:22 crc 
kubenswrapper[4624]: I0228 03:39:22.464673 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.465453 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:22 crc kubenswrapper[4624]: I0228 03:39:22.468116 4624 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.086906 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.088341 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.089187 4624 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.089949 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.112112 4624 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.112145 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:23 crc kubenswrapper[4624]: E0228 03:39:23.112744 4624 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.113611 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:23 crc kubenswrapper[4624]: W0228 03:39:23.147228 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6aa9f57cec25da8e27f384929fd9a15772b18d8f7240df1892fc8aec072f72f3 WatchSource:0}: Error finding container 6aa9f57cec25da8e27f384929fd9a15772b18d8f7240df1892fc8aec072f72f3: Status 404 returned error can't find the container with id 6aa9f57cec25da8e27f384929fd9a15772b18d8f7240df1892fc8aec072f72f3 Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.480435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d37d282be8532d703ed433fe0d09675c79bdafb2db994c6c63752b8a14b04fd"} Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.480518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6aa9f57cec25da8e27f384929fd9a15772b18d8f7240df1892fc8aec072f72f3"} Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.480854 4624 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.480878 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.481251 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: E0228 03:39:23.481643 4624 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.482310 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.482735 4624 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.484982 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.485619 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.485665 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c95e49d27aeb5f17429b63b64b5b659a6e1b2fdc4f1921b5268e2b58180e539"} Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.486466 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.486651 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.487021 4624 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.823579 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.828575 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.829466 4624 status_manager.go:851] "Failed to get status 
for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.830286 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:23 crc kubenswrapper[4624]: I0228 03:39:23.831162 4624 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.497121 4624 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8d37d282be8532d703ed433fe0d09675c79bdafb2db994c6c63752b8a14b04fd" exitCode=0 Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.498550 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8d37d282be8532d703ed433fe0d09675c79bdafb2db994c6c63752b8a14b04fd"} Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.498603 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.498919 4624 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.498953 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.500245 4624 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:24 crc kubenswrapper[4624]: E0228 03:39:24.500670 4624 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.501063 4624 status_manager.go:851] "Failed to get status for pod" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:24 crc kubenswrapper[4624]: I0228 03:39:24.501801 4624 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Feb 28 03:39:25 crc kubenswrapper[4624]: I0228 03:39:25.505051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b81e2a202c006ebe795bc57076022f9e27823ec08332ef680164457e8eea6e40"} Feb 28 03:39:25 crc kubenswrapper[4624]: I0228 03:39:25.505541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f41ef68e4b6b1cdf3d93a5e0507eebef8f7e1c894ccc99eb0cd193ad14d010e"} Feb 28 03:39:25 crc kubenswrapper[4624]: I0228 03:39:25.505553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6ec6bbdebbf3ea74b8b37968178a00536f56b725676ba87143bc0c2662325ef0"} Feb 28 03:39:26 crc kubenswrapper[4624]: I0228 03:39:26.513732 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1acc36eb2bebdf521b35afb0f3c5ef922da1e9995a9fba8c2cbd0378d76851f"} Feb 28 03:39:26 crc kubenswrapper[4624]: I0228 03:39:26.514119 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0d2ea67d0fdb2e940f8d647570179b955cc23dba7d69dc32480ecdf3a9e47c92"} Feb 28 03:39:26 crc kubenswrapper[4624]: I0228 03:39:26.514150 4624 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:26 crc kubenswrapper[4624]: I0228 03:39:26.514180 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:28 crc kubenswrapper[4624]: I0228 03:39:28.114150 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:28 crc kubenswrapper[4624]: I0228 03:39:28.114606 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:28 crc kubenswrapper[4624]: I0228 03:39:28.118719 4624 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]log ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]etcd ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/generic-apiserver-start-informers ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/priority-and-fairness-filter ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-apiextensions-informers ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-apiextensions-controllers ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/crd-informer-synced ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-system-namespaces-controller ok Feb 28 03:39:28 crc 
kubenswrapper[4624]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 28 03:39:28 crc kubenswrapper[4624]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/bootstrap-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/start-kube-aggregator-informers ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-registration-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-discovery-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]autoregister-completion ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-openapi-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 28 03:39:28 crc kubenswrapper[4624]: livez check failed Feb 28 
03:39:28 crc kubenswrapper[4624]: I0228 03:39:28.118791 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 03:39:31 crc kubenswrapper[4624]: I0228 03:39:31.876465 4624 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:31 crc kubenswrapper[4624]: I0228 03:39:31.894530 4624 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c078260e-e0bc-44c2-bb84-5008e709b848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:39:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:39:24Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:39:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T03:39:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d37d282be8532d703ed433fe0d09675c79bdafb2db994c6c63752b8a14b04fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d37d282be8532d703ed433fe0d09675c79bdafb2db994c6c63752b8a14b04fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T03:39:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T03:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods 
\"kube-apiserver-crc\" not found" Feb 28 03:39:32 crc kubenswrapper[4624]: I0228 03:39:32.005176 4624 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ed92d0e0-2e90-438a-8bac-6471dcb3869f" Feb 28 03:39:32 crc kubenswrapper[4624]: I0228 03:39:32.562973 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:32 crc kubenswrapper[4624]: I0228 03:39:32.563556 4624 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:32 crc kubenswrapper[4624]: I0228 03:39:32.563596 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:32 crc kubenswrapper[4624]: I0228 03:39:32.567253 4624 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ed92d0e0-2e90-438a-8bac-6471dcb3869f" Feb 28 03:39:33 crc kubenswrapper[4624]: I0228 03:39:33.570418 4624 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:33 crc kubenswrapper[4624]: I0228 03:39:33.571074 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c078260e-e0bc-44c2-bb84-5008e709b848" Feb 28 03:39:33 crc kubenswrapper[4624]: I0228 03:39:33.574808 4624 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ed92d0e0-2e90-438a-8bac-6471dcb3869f" Feb 28 03:39:36 crc kubenswrapper[4624]: I0228 03:39:36.540267 4624 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 03:39:41 crc kubenswrapper[4624]: I0228 03:39:41.659757 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 03:39:41 crc kubenswrapper[4624]: I0228 03:39:41.770877 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 03:39:41 crc kubenswrapper[4624]: I0228 03:39:41.885736 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 03:39:42 crc kubenswrapper[4624]: I0228 03:39:42.204820 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 03:39:42 crc kubenswrapper[4624]: I0228 03:39:42.387881 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 03:39:42 crc kubenswrapper[4624]: I0228 03:39:42.708150 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 03:39:42 crc kubenswrapper[4624]: I0228 03:39:42.869071 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.110125 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.294558 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.322958 4624 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 
28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.323854 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.323824906 podStartE2EDuration="36.323824906s" podCreationTimestamp="2026-02-28 03:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:39:31.93524207 +0000 UTC m=+226.599281379" watchObservedRunningTime="2026-02-28 03:39:43.323824906 +0000 UTC m=+237.987864255" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.331497 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.331585 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.338608 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.371695 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.37164893 podStartE2EDuration="12.37164893s" podCreationTimestamp="2026-02-28 03:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:39:43.370837178 +0000 UTC m=+238.034876527" watchObservedRunningTime="2026-02-28 03:39:43.37164893 +0000 UTC m=+238.035688249" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.522325 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.609796 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.714269 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 03:39:43 crc kubenswrapper[4624]: I0228 03:39:43.818286 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.044927 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.241963 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.319057 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.322450 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.502537 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.584785 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.628125 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.682256 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 
03:39:44.783697 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.856942 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 28 03:39:44 crc kubenswrapper[4624]: I0228 03:39:44.945967 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.046815 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.133300 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.245751 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.309911 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.371172 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.386472 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.402928 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.405433 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.424844 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.552227 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.572175 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.602230 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.671963 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.808117 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.966856 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 28 03:39:45 crc kubenswrapper[4624]: I0228 03:39:45.972497 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.056679 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.060150 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.080466 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.229385 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.343122 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.448150 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.468875 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.516782 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.533551 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.622189 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.626073 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.631966 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.747934 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.937437 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.942036 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 28 03:39:46 crc kubenswrapper[4624]: I0228 03:39:46.975306 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.058826 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.186575 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.195998 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.211431 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.255510 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.288129 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.400321 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.449632 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.529133 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.724295 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.757000 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.769504 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.903978 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.944014 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 28 03:39:47 crc kubenswrapper[4624]: I0228 03:39:47.947691 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.120519 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.125444 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.131006 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.191809 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.198490 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.273996 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.386642 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.416542 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.416654 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.524842 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.614629 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.633196 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.636067 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.698188 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.702317 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.704328 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.704552 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.778980 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.812209 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.821633 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.942103 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.997232 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 28 03:39:48 crc kubenswrapper[4624]: I0228 03:39:48.999029 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.008403 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.078871 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.109813 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.251298 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.268672 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.375049 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.502389 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.514110 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.540664 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.540732 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.584469 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.681571 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.820544 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 28 03:39:49 crc kubenswrapper[4624]: I0228 03:39:49.920537 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.067189 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.073484 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.073496 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.121440 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.181756 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.210897 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.230564 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.276297 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.329981 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.366543 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.378554 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.500594 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.538719 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.591778 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.608679 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.670338 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.750220 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.766916 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.821174 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.870583 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.903225 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.941318 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.942591 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.947832 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 28 03:39:50 crc kubenswrapper[4624]: I0228 03:39:50.977016 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.057141 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.147649 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.162208 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.261299 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.391493 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.415533 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.449533 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.462616 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.484736 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.545670 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.567425 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.651264 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.749001 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.751209 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.856688 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.872136 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.883789 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.951871 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 28 03:39:51 crc kubenswrapper[4624]: I0228 03:39:51.969404 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.003977 4624 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.099656 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.102420 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.134777 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.235231 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.331023 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.333411 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.355650 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.430153 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.434544 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.446332 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.458041 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.495500 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.516298 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.726601 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.771510 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.784613 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.876465 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.933230 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 28 03:39:52 crc kubenswrapper[4624]: I0228 03:39:52.971583 4624 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.010533 4624 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.159537 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.183011 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.218577 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.268270 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.297309 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.393728 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.414945 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.437191 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.466305 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.540178 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.545684 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.747489 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.759714 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.858393 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 28 03:39:53 crc kubenswrapper[4624]: I0228 03:39:53.860658 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.043386 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.076862 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.128254 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.186590 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.229618 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.256643 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.402470 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.457037 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.539776 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.556804 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.610729 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.630384 4624 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.630764 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5763e2fff08f4ec232b267f8835484715bce567b3438cd066f334361aeedc6c6" gracePeriod=5
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.645920 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.755201 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.849721 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 28 03:39:54 crc kubenswrapper[4624]: I0228 03:39:54.990181 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.053481 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.312492 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.565432 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.653357 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.831776 4624 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.864477 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 28 03:39:55 crc kubenswrapper[4624]: I0228 03:39:55.916921 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.012053 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.093050 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.098886 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.293468 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.329560 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.423941 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 28 03:39:56 crc kubenswrapper[4624]: I0228 03:39:56.697615 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.006596 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.040345 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.050762 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.173028 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.299426 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.354343 4624 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.379515 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.641967 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.713371 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.814776 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 28 03:39:57 crc kubenswrapper[4624]: I0228 03:39:57.876063 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 28 03:39:59 crc kubenswrapper[4624]: I0228 03:39:59.800248 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 28 03:39:59 crc kubenswrapper[4624]: I0228 03:39:59.800782 4624 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5763e2fff08f4ec232b267f8835484715bce567b3438cd066f334361aeedc6c6" exitCode=137
Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.226005 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537500-rfqsj"]
Feb 28 03:40:00 crc kubenswrapper[4624]: E0228 03:40:00.226281 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" containerName="installer"
Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.226295 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" containerName="installer"
Feb 28 03:40:00 crc kubenswrapper[4624]: E0228 03:40:00.226314 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.226319 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.226422 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbb0bd0-131a-412e-abb0-040e4e5ebf10" containerName="installer"
Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.226439 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.226919 4624 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.230112 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.230523 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.231637 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.234884 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.234978 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.260920 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537500-rfqsj"] Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.344865 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.344959 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345158 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345148 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345382 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345489 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345559 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345615 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345687 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.345984 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbjj\" (UniqueName: \"kubernetes.io/projected/d3585a2f-987d-439f-af8d-24734fd1c702-kube-api-access-prbjj\") pod \"auto-csr-approver-29537500-rfqsj\" (UID: \"d3585a2f-987d-439f-af8d-24734fd1c702\") " pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.346176 4624 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.346199 4624 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.346212 4624 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath 
\"\"" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.346226 4624 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.356700 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.447694 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbjj\" (UniqueName: \"kubernetes.io/projected/d3585a2f-987d-439f-af8d-24734fd1c702-kube-api-access-prbjj\") pod \"auto-csr-approver-29537500-rfqsj\" (UID: \"d3585a2f-987d-439f-af8d-24734fd1c702\") " pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.447790 4624 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.465065 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbjj\" (UniqueName: \"kubernetes.io/projected/d3585a2f-987d-439f-af8d-24734fd1c702-kube-api-access-prbjj\") pod \"auto-csr-approver-29537500-rfqsj\" (UID: \"d3585a2f-987d-439f-af8d-24734fd1c702\") " pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.542974 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.809727 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.809820 4624 scope.go:117] "RemoveContainer" containerID="5763e2fff08f4ec232b267f8835484715bce567b3438cd066f334361aeedc6c6" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.809973 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 03:40:00 crc kubenswrapper[4624]: I0228 03:40:00.967629 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537500-rfqsj"] Feb 28 03:40:00 crc kubenswrapper[4624]: W0228 03:40:00.977795 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3585a2f_987d_439f_af8d_24734fd1c702.slice/crio-5e21359d1fe7add36dada8fd0c6b95b2488d8b015eb4bf7a383098392e2909bf WatchSource:0}: Error finding container 5e21359d1fe7add36dada8fd0c6b95b2488d8b015eb4bf7a383098392e2909bf: Status 404 returned error can't find the container with id 5e21359d1fe7add36dada8fd0c6b95b2488d8b015eb4bf7a383098392e2909bf Feb 28 03:40:01 crc kubenswrapper[4624]: I0228 03:40:01.819461 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" event={"ID":"d3585a2f-987d-439f-af8d-24734fd1c702","Type":"ContainerStarted","Data":"5e21359d1fe7add36dada8fd0c6b95b2488d8b015eb4bf7a383098392e2909bf"} Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.094416 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 
28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.094685 4624 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.109403 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.109452 4624 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ead05ee9-7f2b-457e-93a9-a32d643e1684" Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.114179 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.114239 4624 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ead05ee9-7f2b-457e-93a9-a32d643e1684" Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.830671 4624 generic.go:334] "Generic (PLEG): container finished" podID="d3585a2f-987d-439f-af8d-24734fd1c702" containerID="6ccbd3acbc14da951585b4b617d1a1bdb7cbd928e0ef0c1254fb6916fa591e8a" exitCode=0 Feb 28 03:40:02 crc kubenswrapper[4624]: I0228 03:40:02.832426 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" event={"ID":"d3585a2f-987d-439f-af8d-24734fd1c702","Type":"ContainerDied","Data":"6ccbd3acbc14da951585b4b617d1a1bdb7cbd928e0ef0c1254fb6916fa591e8a"} Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.145547 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.311041 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prbjj\" (UniqueName: \"kubernetes.io/projected/d3585a2f-987d-439f-af8d-24734fd1c702-kube-api-access-prbjj\") pod \"d3585a2f-987d-439f-af8d-24734fd1c702\" (UID: \"d3585a2f-987d-439f-af8d-24734fd1c702\") " Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.317991 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3585a2f-987d-439f-af8d-24734fd1c702-kube-api-access-prbjj" (OuterVolumeSpecName: "kube-api-access-prbjj") pod "d3585a2f-987d-439f-af8d-24734fd1c702" (UID: "d3585a2f-987d-439f-af8d-24734fd1c702"). InnerVolumeSpecName "kube-api-access-prbjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.412691 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prbjj\" (UniqueName: \"kubernetes.io/projected/d3585a2f-987d-439f-af8d-24734fd1c702-kube-api-access-prbjj\") on node \"crc\" DevicePath \"\"" Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.849235 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" event={"ID":"d3585a2f-987d-439f-af8d-24734fd1c702","Type":"ContainerDied","Data":"5e21359d1fe7add36dada8fd0c6b95b2488d8b015eb4bf7a383098392e2909bf"} Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.849289 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e21359d1fe7add36dada8fd0c6b95b2488d8b015eb4bf7a383098392e2909bf" Feb 28 03:40:04 crc kubenswrapper[4624]: I0228 03:40:04.849315 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537500-rfqsj" Feb 28 03:40:10 crc kubenswrapper[4624]: I0228 03:40:10.358623 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 03:40:11 crc kubenswrapper[4624]: I0228 03:40:11.110804 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 03:40:12 crc kubenswrapper[4624]: I0228 03:40:12.120931 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 03:40:12 crc kubenswrapper[4624]: I0228 03:40:12.520066 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 03:40:13 crc kubenswrapper[4624]: I0228 03:40:13.829592 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.462902 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.492397 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.652116 4624 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zmfld container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.652291 4624 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zmfld container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.652312 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.652191 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.922933 4624 generic.go:334] "Generic (PLEG): container finished" podID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerID="063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f" exitCode=0 Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.923103 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" event={"ID":"3912910a-bd9b-4b5d-a67a-c6929de727b9","Type":"ContainerDied","Data":"063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f"} Feb 28 03:40:14 crc kubenswrapper[4624]: I0228 03:40:14.924848 4624 scope.go:117] "RemoveContainer" containerID="063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f" Feb 28 03:40:15 crc kubenswrapper[4624]: I0228 03:40:15.263723 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 03:40:15 crc kubenswrapper[4624]: I0228 03:40:15.336485 4624 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 03:40:15 crc kubenswrapper[4624]: I0228 03:40:15.782342 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 28 03:40:15 crc kubenswrapper[4624]: I0228 03:40:15.935497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" event={"ID":"3912910a-bd9b-4b5d-a67a-c6929de727b9","Type":"ContainerStarted","Data":"c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b"} Feb 28 03:40:15 crc kubenswrapper[4624]: I0228 03:40:15.937143 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:40:15 crc kubenswrapper[4624]: I0228 03:40:15.942437 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:40:16 crc kubenswrapper[4624]: I0228 03:40:16.207832 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 03:40:16 crc kubenswrapper[4624]: I0228 03:40:16.543347 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 03:40:18 crc kubenswrapper[4624]: I0228 03:40:18.183016 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.540008 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.540557 4624 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.540669 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.541494 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.541567 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748" gracePeriod=600 Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.963205 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748" exitCode=0 Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 03:40:19.963654 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748"} Feb 28 03:40:19 crc kubenswrapper[4624]: I0228 
03:40:19.963680 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"e168d945aae1509ec586eba77f4d6730480448e3bb4c6899a07204dd97ea7324"} Feb 28 03:40:20 crc kubenswrapper[4624]: I0228 03:40:20.983801 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 03:40:21 crc kubenswrapper[4624]: I0228 03:40:21.792426 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 03:40:23 crc kubenswrapper[4624]: I0228 03:40:23.824987 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 03:40:24 crc kubenswrapper[4624]: I0228 03:40:24.431139 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 03:40:26 crc kubenswrapper[4624]: I0228 03:40:26.296068 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 03:40:26 crc kubenswrapper[4624]: I0228 03:40:26.297632 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 03:40:27 crc kubenswrapper[4624]: I0228 03:40:27.492516 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 03:40:28 crc kubenswrapper[4624]: I0228 03:40:28.088750 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 03:40:29 crc kubenswrapper[4624]: I0228 03:40:29.910722 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 03:40:31 crc kubenswrapper[4624]: 
I0228 03:40:31.087590 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 03:40:31 crc kubenswrapper[4624]: I0228 03:40:31.169657 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 03:40:34 crc kubenswrapper[4624]: I0228 03:40:34.089787 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 03:40:34 crc kubenswrapper[4624]: I0228 03:40:34.823880 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 03:40:36 crc kubenswrapper[4624]: I0228 03:40:36.855119 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 03:40:59 crc kubenswrapper[4624]: I0228 03:40:59.952423 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zfllg"] Feb 28 03:40:59 crc kubenswrapper[4624]: E0228 03:40:59.953441 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3585a2f-987d-439f-af8d-24734fd1c702" containerName="oc" Feb 28 03:40:59 crc kubenswrapper[4624]: I0228 03:40:59.953457 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3585a2f-987d-439f-af8d-24734fd1c702" containerName="oc" Feb 28 03:40:59 crc kubenswrapper[4624]: I0228 03:40:59.953601 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3585a2f-987d-439f-af8d-24734fd1c702" containerName="oc" Feb 28 03:40:59 crc kubenswrapper[4624]: I0228 03:40:59.954163 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:40:59 crc kubenswrapper[4624]: I0228 03:40:59.975043 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zfllg"] Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.019727 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79d9v\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-kube-api-access-79d9v\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.019802 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.019844 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.019870 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-bound-sa-token\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.020135 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-registry-tls\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.020194 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-trusted-ca\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.020265 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.020296 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-registry-certificates\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.076277 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.121535 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-registry-tls\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.121598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-trusted-ca\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.121818 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.121848 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-registry-certificates\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.121889 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-79d9v\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-kube-api-access-79d9v\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.121977 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.122058 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-bound-sa-token\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.122768 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.123345 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-trusted-ca\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc 
kubenswrapper[4624]: I0228 03:41:00.123909 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-registry-certificates\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.137391 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-registry-tls\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.137527 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.140226 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-bound-sa-token\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.143936 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79d9v\" (UniqueName: \"kubernetes.io/projected/c8719711-d81e-4efc-8a1f-d69b0a0ad2ce-kube-api-access-79d9v\") pod \"image-registry-66df7c8f76-zfllg\" (UID: \"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.271450 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:00 crc kubenswrapper[4624]: I0228 03:41:00.565822 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zfllg"] Feb 28 03:41:01 crc kubenswrapper[4624]: I0228 03:41:01.251219 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" event={"ID":"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce","Type":"ContainerStarted","Data":"7ad745e3d873c889782eaf3382d30b1a4f7feee59a91dd76035a6a15177c6087"} Feb 28 03:41:01 crc kubenswrapper[4624]: I0228 03:41:01.251685 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:01 crc kubenswrapper[4624]: I0228 03:41:01.251701 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" event={"ID":"c8719711-d81e-4efc-8a1f-d69b0a0ad2ce","Type":"ContainerStarted","Data":"ef9e5e879186b362c639c826933e424b5016d1eff89c18de1d11299e967a5875"} Feb 28 03:41:01 crc kubenswrapper[4624]: I0228 03:41:01.282001 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" podStartSLOduration=2.281971802 podStartE2EDuration="2.281971802s" podCreationTimestamp="2026-02-28 03:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:41:01.275776946 +0000 UTC m=+315.939816265" watchObservedRunningTime="2026-02-28 03:41:01.281971802 +0000 UTC m=+315.946011121" Feb 28 03:41:20 crc kubenswrapper[4624]: I0228 03:41:20.278624 4624 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zfllg" Feb 28 03:41:20 crc kubenswrapper[4624]: I0228 03:41:20.363737 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzbdn"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.679727 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n698"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.682196 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2n698" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="registry-server" containerID="cri-o://c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c" gracePeriod=30 Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.698422 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjcv7"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.699188 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jjcv7" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="registry-server" containerID="cri-o://fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce" gracePeriod=30 Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.716959 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zmfld"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.718625 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator" containerID="cri-o://c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b" gracePeriod=30 Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 
03:41:28.728415 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2pjq"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.728689 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v2pjq" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="registry-server" containerID="cri-o://243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4" gracePeriod=30 Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.755282 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x54xc"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.755657 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x54xc" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="registry-server" containerID="cri-o://cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2" gracePeriod=30 Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.762474 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m6bvr"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.763572 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.778173 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m6bvr"] Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.956587 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08a27942-dc8c-4905-b3d3-7202aae79787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.956736 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnslr\" (UniqueName: \"kubernetes.io/projected/08a27942-dc8c-4905-b3d3-7202aae79787-kube-api-access-lnslr\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:28 crc kubenswrapper[4624]: I0228 03:41:28.956765 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08a27942-dc8c-4905-b3d3-7202aae79787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.058876 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnslr\" (UniqueName: \"kubernetes.io/projected/08a27942-dc8c-4905-b3d3-7202aae79787-kube-api-access-lnslr\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: 
\"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.058959 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08a27942-dc8c-4905-b3d3-7202aae79787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.059047 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08a27942-dc8c-4905-b3d3-7202aae79787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.061504 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08a27942-dc8c-4905-b3d3-7202aae79787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.084100 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnslr\" (UniqueName: \"kubernetes.io/projected/08a27942-dc8c-4905-b3d3-7202aae79787-kube-api-access-lnslr\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.087711 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/08a27942-dc8c-4905-b3d3-7202aae79787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m6bvr\" (UID: \"08a27942-dc8c-4905-b3d3-7202aae79787\") " pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.110841 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n698" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.201473 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.202609 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.241069 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2pjq" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.266882 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-catalog-content\") pod \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.267010 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj2jm\" (UniqueName: \"kubernetes.io/projected/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-kube-api-access-bj2jm\") pod \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.267075 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-utilities\") pod \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\" (UID: \"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.267952 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-utilities" (OuterVolumeSpecName: "utilities") pod "8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" (UID: "8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.268278 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.275927 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-kube-api-access-bj2jm" (OuterVolumeSpecName: "kube-api-access-bj2jm") pod "8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" (UID: "8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f"). InnerVolumeSpecName "kube-api-access-bj2jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.296632 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x54xc" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.323504 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.365983 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" (UID: "8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.369946 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-catalog-content\") pod \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370013 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4pvn\" (UniqueName: \"kubernetes.io/projected/3912910a-bd9b-4b5d-a67a-c6929de727b9-kube-api-access-p4pvn\") pod \"3912910a-bd9b-4b5d-a67a-c6929de727b9\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370042 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-utilities\") pod \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370067 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwb82\" (UniqueName: \"kubernetes.io/projected/51920ae4-b602-4113-b233-57fdef96cd52-kube-api-access-pwb82\") pod \"51920ae4-b602-4113-b233-57fdef96cd52\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370133 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca\") pod \"3912910a-bd9b-4b5d-a67a-c6929de727b9\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370154 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-catalog-content\") pod \"51920ae4-b602-4113-b233-57fdef96cd52\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370178 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics\") pod \"3912910a-bd9b-4b5d-a67a-c6929de727b9\" (UID: \"3912910a-bd9b-4b5d-a67a-c6929de727b9\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370212 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-utilities\") pod \"51920ae4-b602-4113-b233-57fdef96cd52\" (UID: \"51920ae4-b602-4113-b233-57fdef96cd52\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370273 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rw2v\" (UniqueName: \"kubernetes.io/projected/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-kube-api-access-5rw2v\") pod \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\" (UID: \"cd7f17b2-3180-41e3-a8cf-1f40338eadf0\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370544 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.370561 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj2jm\" (UniqueName: \"kubernetes.io/projected/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f-kube-api-access-bj2jm\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.371299 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-utilities" (OuterVolumeSpecName: "utilities") pod "cd7f17b2-3180-41e3-a8cf-1f40338eadf0" (UID: "cd7f17b2-3180-41e3-a8cf-1f40338eadf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.372536 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3912910a-bd9b-4b5d-a67a-c6929de727b9" (UID: "3912910a-bd9b-4b5d-a67a-c6929de727b9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.372961 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-utilities" (OuterVolumeSpecName: "utilities") pod "51920ae4-b602-4113-b233-57fdef96cd52" (UID: "51920ae4-b602-4113-b233-57fdef96cd52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.374799 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3912910a-bd9b-4b5d-a67a-c6929de727b9-kube-api-access-p4pvn" (OuterVolumeSpecName: "kube-api-access-p4pvn") pod "3912910a-bd9b-4b5d-a67a-c6929de727b9" (UID: "3912910a-bd9b-4b5d-a67a-c6929de727b9"). InnerVolumeSpecName "kube-api-access-p4pvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.377657 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3912910a-bd9b-4b5d-a67a-c6929de727b9" (UID: "3912910a-bd9b-4b5d-a67a-c6929de727b9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.379705 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51920ae4-b602-4113-b233-57fdef96cd52-kube-api-access-pwb82" (OuterVolumeSpecName: "kube-api-access-pwb82") pod "51920ae4-b602-4113-b233-57fdef96cd52" (UID: "51920ae4-b602-4113-b233-57fdef96cd52"). InnerVolumeSpecName "kube-api-access-pwb82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.382312 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-kube-api-access-5rw2v" (OuterVolumeSpecName: "kube-api-access-5rw2v") pod "cd7f17b2-3180-41e3-a8cf-1f40338eadf0" (UID: "cd7f17b2-3180-41e3-a8cf-1f40338eadf0"). InnerVolumeSpecName "kube-api-access-5rw2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.403036 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51920ae4-b602-4113-b233-57fdef96cd52" (UID: "51920ae4-b602-4113-b233-57fdef96cd52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.440652 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd7f17b2-3180-41e3-a8cf-1f40338eadf0" (UID: "cd7f17b2-3180-41e3-a8cf-1f40338eadf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.460672 4624 generic.go:334] "Generic (PLEG): container finished" podID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerID="c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b" exitCode=0 Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.460765 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" event={"ID":"3912910a-bd9b-4b5d-a67a-c6929de727b9","Type":"ContainerDied","Data":"c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b"} Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.460786 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.460798 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zmfld" event={"ID":"3912910a-bd9b-4b5d-a67a-c6929de727b9","Type":"ContainerDied","Data":"fe49544d46fadc77d90af17ee03c543326caf0b0e8e05d504b9282e40f9630d9"} Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.460844 4624 scope.go:117] "RemoveContainer" containerID="c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.467566 4624 generic.go:334] "Generic (PLEG): container finished" podID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerID="fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce" exitCode=0 Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.467634 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerDied","Data":"fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce"} Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.467661 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjcv7" event={"ID":"cd7f17b2-3180-41e3-a8cf-1f40338eadf0","Type":"ContainerDied","Data":"f6b2431c7cf2b1341dc4c749320ab100876d40a2bede0a11d8ab4fa1333821e1"} Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.467734 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjcv7" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473100 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msnmv\" (UniqueName: \"kubernetes.io/projected/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-kube-api-access-msnmv\") pod \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473206 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-utilities\") pod \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473331 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-catalog-content\") pod \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\" (UID: \"69a0ae1a-bcd4-41f5-af2c-07aebcb45296\") " Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473831 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473858 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4pvn\" (UniqueName: \"kubernetes.io/projected/3912910a-bd9b-4b5d-a67a-c6929de727b9-kube-api-access-p4pvn\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473872 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:29 crc kubenswrapper[4624]: 
I0228 03:41:29.473887 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwb82\" (UniqueName: \"kubernetes.io/projected/51920ae4-b602-4113-b233-57fdef96cd52-kube-api-access-pwb82\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473900 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473909 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473919 4624 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3912910a-bd9b-4b5d-a67a-c6929de727b9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473942 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51920ae4-b602-4113-b233-57fdef96cd52-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.473952 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rw2v\" (UniqueName: \"kubernetes.io/projected/cd7f17b2-3180-41e3-a8cf-1f40338eadf0-kube-api-access-5rw2v\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.478009 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-utilities" (OuterVolumeSpecName: "utilities") pod "69a0ae1a-bcd4-41f5-af2c-07aebcb45296" (UID: "69a0ae1a-bcd4-41f5-af2c-07aebcb45296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.484954 4624 scope.go:117] "RemoveContainer" containerID="063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.487758 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-kube-api-access-msnmv" (OuterVolumeSpecName: "kube-api-access-msnmv") pod "69a0ae1a-bcd4-41f5-af2c-07aebcb45296" (UID: "69a0ae1a-bcd4-41f5-af2c-07aebcb45296"). InnerVolumeSpecName "kube-api-access-msnmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.497270 4624 generic.go:334] "Generic (PLEG): container finished" podID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerID="c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c" exitCode=0
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.497353 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n698" event={"ID":"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f","Type":"ContainerDied","Data":"c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c"}
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.497383 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n698"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.497386 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n698" event={"ID":"8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f","Type":"ContainerDied","Data":"42783a68ac1de4ef621d239e5c6a2cb485afe4af4e981879e51fb240f327eea3"}
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.499106 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zmfld"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.505383 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zmfld"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.506676 4624 generic.go:334] "Generic (PLEG): container finished" podID="51920ae4-b602-4113-b233-57fdef96cd52" containerID="243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4" exitCode=0
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.506756 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerDied","Data":"243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4"}
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.506779 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2pjq" event={"ID":"51920ae4-b602-4113-b233-57fdef96cd52","Type":"ContainerDied","Data":"7625a9f9d3ea6f21786b3fa7838aa3ff99737d63733c90be636f660c7f60ad34"}
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.506859 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2pjq"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.513003 4624 generic.go:334] "Generic (PLEG): container finished" podID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerID="cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2" exitCode=0
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.513061 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerDied","Data":"cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2"}
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.513130 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x54xc" event={"ID":"69a0ae1a-bcd4-41f5-af2c-07aebcb45296","Type":"ContainerDied","Data":"b6019718072c07f346bf0c94b753e8ec9f7357599ff29647810abef73960a5b8"}
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.513229 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x54xc"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.529817 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjcv7"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.539462 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jjcv7"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.539645 4624 scope.go:117] "RemoveContainer" containerID="c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.543221 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n698"]
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.546217 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b\": container with ID starting with c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b not found: ID does not exist" containerID="c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.546329 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b"} err="failed to get container status \"c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b\": rpc error: code = NotFound desc = could not find container \"c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b\": container with ID starting with c544a16aa70f24738932523ba2b303b07cd76003b3f6e28d6972a09f2b4dfd3b not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.546396 4624 scope.go:117] "RemoveContainer" containerID="063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.546644 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2n698"]
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.548243 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f\": container with ID starting with 063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f not found: ID does not exist" containerID="063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.548391 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f"} err="failed to get container status \"063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f\": rpc error: code = NotFound desc = could not find container \"063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f\": container with ID starting with 063c8695eafc4ab09291479caddfa8e85e5325cd988d8cb620957db06163943f not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.548501 4624 scope.go:117] "RemoveContainer" containerID="fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.566380 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2pjq"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.575766 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msnmv\" (UniqueName: \"kubernetes.io/projected/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-kube-api-access-msnmv\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.575795 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.575811 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2pjq"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.588806 4624 scope.go:117] "RemoveContainer" containerID="f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.605765 4624 scope.go:117] "RemoveContainer" containerID="7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.629757 4624 scope.go:117] "RemoveContainer" containerID="fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.634665 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce\": container with ID starting with fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce not found: ID does not exist" containerID="fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.634705 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce"} err="failed to get container status \"fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce\": rpc error: code = NotFound desc = could not find container \"fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce\": container with ID starting with fc7e29664e95b5d343f8401ea6bf686555c4c6121a69693862306cc7d8ae93ce not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.634731 4624 scope.go:117] "RemoveContainer" containerID="f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.635317 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef\": container with ID starting with f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef not found: ID does not exist" containerID="f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.635367 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef"} err="failed to get container status \"f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef\": rpc error: code = NotFound desc = could not find container \"f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef\": container with ID starting with f6b767fe51b108726439e8918a15103c774ed2bc7285a9c688db40586a3fcdef not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.635407 4624 scope.go:117] "RemoveContainer" containerID="7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.635820 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76\": container with ID starting with 7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76 not found: ID does not exist" containerID="7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.635857 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76"} err="failed to get container status \"7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76\": rpc error: code = NotFound desc = could not find container \"7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76\": container with ID starting with 7ba9c07e95fa4a427a8e5395527cd6b0ad5e72ddb1f97dbfea58d0e015736f76 not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.635885 4624 scope.go:117] "RemoveContainer" containerID="c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.653982 4624 scope.go:117] "RemoveContainer" containerID="aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.668688 4624 scope.go:117] "RemoveContainer" containerID="7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.679265 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69a0ae1a-bcd4-41f5-af2c-07aebcb45296" (UID: "69a0ae1a-bcd4-41f5-af2c-07aebcb45296"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.689562 4624 scope.go:117] "RemoveContainer" containerID="c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.691013 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c\": container with ID starting with c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c not found: ID does not exist" containerID="c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.691149 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c"} err="failed to get container status \"c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c\": rpc error: code = NotFound desc = could not find container \"c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c\": container with ID starting with c3e36847dd26d882ebdc56be54fbd406f6dea461cee6ca3da7daceb91035e98c not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.691237 4624 scope.go:117] "RemoveContainer" containerID="aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.691724 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7\": container with ID starting with aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7 not found: ID does not exist" containerID="aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.691752 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7"} err="failed to get container status \"aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7\": rpc error: code = NotFound desc = could not find container \"aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7\": container with ID starting with aa0ef59730a9aec790a23926fa4cc82666d81c3ccc7a05a969e44493335db6b7 not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.691769 4624 scope.go:117] "RemoveContainer" containerID="7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.692218 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e\": container with ID starting with 7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e not found: ID does not exist" containerID="7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.692291 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e"} err="failed to get container status \"7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e\": rpc error: code = NotFound desc = could not find container \"7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e\": container with ID starting with 7a26c1a67f1823df0f4e5bf88e8a3ccadb0e721f1db0d17361aa7e1e78d0176e not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.692335 4624 scope.go:117] "RemoveContainer" containerID="243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.705356 4624 scope.go:117] "RemoveContainer" containerID="9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.723741 4624 scope.go:117] "RemoveContainer" containerID="5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.737140 4624 scope.go:117] "RemoveContainer" containerID="243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.737569 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4\": container with ID starting with 243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4 not found: ID does not exist" containerID="243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.737635 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4"} err="failed to get container status \"243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4\": rpc error: code = NotFound desc = could not find container \"243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4\": container with ID starting with 243b5cb30ed055f65833c133d09a7e4726538792fc5ef5a058bad6f32916fbf4 not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.737675 4624 scope.go:117] "RemoveContainer" containerID="9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.738037 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5\": container with ID starting with 9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5 not found: ID does not exist" containerID="9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.738094 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5"} err="failed to get container status \"9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5\": rpc error: code = NotFound desc = could not find container \"9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5\": container with ID starting with 9df2c745c4fe83c6429f407b4ebaf2ad204555307e91adf20843eb9b103f0ee5 not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.738113 4624 scope.go:117] "RemoveContainer" containerID="5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.738382 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9\": container with ID starting with 5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9 not found: ID does not exist" containerID="5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.738423 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9"} err="failed to get container status \"5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9\": rpc error: code = NotFound desc = could not find container \"5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9\": container with ID starting with 5cd563696d6f4be236afe9d93efb4810037b265e367f560dc35c7a479b477ab9 not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.738445 4624 scope.go:117] "RemoveContainer" containerID="cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.753429 4624 scope.go:117] "RemoveContainer" containerID="4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.768860 4624 scope.go:117] "RemoveContainer" containerID="2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.779361 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a0ae1a-bcd4-41f5-af2c-07aebcb45296-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.784531 4624 scope.go:117] "RemoveContainer" containerID="cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.785306 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2\": container with ID starting with cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2 not found: ID does not exist" containerID="cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.785436 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2"} err="failed to get container status \"cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2\": rpc error: code = NotFound desc = could not find container \"cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2\": container with ID starting with cbdd9d9d4f30e8a93076734fbaa072517734bfc701f121f61e1716047c9bdbe2 not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.785766 4624 scope.go:117] "RemoveContainer" containerID="4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.786805 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec\": container with ID starting with 4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec not found: ID does not exist" containerID="4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.786838 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec"} err="failed to get container status \"4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec\": rpc error: code = NotFound desc = could not find container \"4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec\": container with ID starting with 4cf3758f8f3a5aca201ae7d50b788cec213194e1f1aed4791d1156679ce05aec not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.786871 4624 scope.go:117] "RemoveContainer" containerID="2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a"
Feb 28 03:41:29 crc kubenswrapper[4624]: E0228 03:41:29.787269 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a\": container with ID starting with 2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a not found: ID does not exist" containerID="2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.787312 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a"} err="failed to get container status \"2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a\": rpc error: code = NotFound desc = could not find container \"2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a\": container with ID starting with 2949eb3c4e4a15ba6d3f6af572f336ce79649985b04b9ce676a70abe6a96436a not found: ID does not exist"
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.841071 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m6bvr"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.900625 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x54xc"]
Feb 28 03:41:29 crc kubenswrapper[4624]: I0228 03:41:29.905710 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x54xc"]
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.093408 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" path="/var/lib/kubelet/pods/3912910a-bd9b-4b5d-a67a-c6929de727b9/volumes"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.094489 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51920ae4-b602-4113-b233-57fdef96cd52" path="/var/lib/kubelet/pods/51920ae4-b602-4113-b233-57fdef96cd52/volumes"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.095052 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" path="/var/lib/kubelet/pods/69a0ae1a-bcd4-41f5-af2c-07aebcb45296/volumes"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.096143 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" path="/var/lib/kubelet/pods/8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f/volumes"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.096709 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" path="/var/lib/kubelet/pods/cd7f17b2-3180-41e3-a8cf-1f40338eadf0/volumes"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.301705 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7796"]
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302060 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302106 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302132 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302140 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302155 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302166 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302182 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302192 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302206 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302216 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302236 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302245 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302258 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302267 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302279 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302288 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302301 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302309 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302323 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302330 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302340 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302348 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302356 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302364 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="extract-utilities"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302378 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302386 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="extract-content"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302516 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a0ae1a-bcd4-41f5-af2c-07aebcb45296" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302529 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302543 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7f17b2-3180-41e3-a8cf-1f40338eadf0" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302553 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="51920ae4-b602-4113-b233-57fdef96cd52" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302564 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1b447a-087a-41f0-8bb8-7fd4b6cd3e2f" containerName="registry-server"
Feb 28 03:41:30 crc kubenswrapper[4624]: E0228 03:41:30.302677 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302684 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.302807 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3912910a-bd9b-4b5d-a67a-c6929de727b9" containerName="marketplace-operator"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.303645 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7796"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.307837 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.324192 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7796"]
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.491938 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8005398-5d8f-4adc-ae71-c01babe23241-catalog-content\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.492419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz54t\" (UniqueName: \"kubernetes.io/projected/a8005398-5d8f-4adc-ae71-c01babe23241-kube-api-access-xz54t\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.492526 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8005398-5d8f-4adc-ae71-c01babe23241-utilities\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796"
Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.530195 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr"
event={"ID":"08a27942-dc8c-4905-b3d3-7202aae79787","Type":"ContainerStarted","Data":"e807e461423ee3c3233890d69868732c4ac0b961d8ef2469133085e9b26f2831"} Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.530692 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" event={"ID":"08a27942-dc8c-4905-b3d3-7202aae79787","Type":"ContainerStarted","Data":"3ffcc0783d8ae3e80f5ec13e0f4862b1e0c27fb9442f9f84811ae16e039c284d"} Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.530807 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.533020 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.555521 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m6bvr" podStartSLOduration=2.555492065 podStartE2EDuration="2.555492065s" podCreationTimestamp="2026-02-28 03:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:41:30.551742375 +0000 UTC m=+345.215781694" watchObservedRunningTime="2026-02-28 03:41:30.555492065 +0000 UTC m=+345.219531374" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.593749 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8005398-5d8f-4adc-ae71-c01babe23241-catalog-content\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.593826 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xz54t\" (UniqueName: \"kubernetes.io/projected/a8005398-5d8f-4adc-ae71-c01babe23241-kube-api-access-xz54t\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.593871 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8005398-5d8f-4adc-ae71-c01babe23241-utilities\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.594335 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8005398-5d8f-4adc-ae71-c01babe23241-utilities\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.594557 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8005398-5d8f-4adc-ae71-c01babe23241-catalog-content\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.613173 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz54t\" (UniqueName: \"kubernetes.io/projected/a8005398-5d8f-4adc-ae71-c01babe23241-kube-api-access-xz54t\") pod \"certified-operators-d7796\" (UID: \"a8005398-5d8f-4adc-ae71-c01babe23241\") " pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:30 crc kubenswrapper[4624]: I0228 03:41:30.620560 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.028481 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7796"] Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.543646 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8005398-5d8f-4adc-ae71-c01babe23241" containerID="a09da5aac2ae0bc3dca735afeccbef84510473cc20ff43c048e762233fe93f94" exitCode=0 Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.543770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7796" event={"ID":"a8005398-5d8f-4adc-ae71-c01babe23241","Type":"ContainerDied","Data":"a09da5aac2ae0bc3dca735afeccbef84510473cc20ff43c048e762233fe93f94"} Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.544322 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7796" event={"ID":"a8005398-5d8f-4adc-ae71-c01babe23241","Type":"ContainerStarted","Data":"b2092158941c5381b156e63de6e00477f00c76886e55581e95f9f1f4c3eb0cd5"} Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.700893 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnmfd"] Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.702024 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.704595 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.722450 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnmfd"] Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.814458 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78f5\" (UniqueName: \"kubernetes.io/projected/e222997d-b739-4944-90b6-ad421288f50a-kube-api-access-g78f5\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.814511 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e222997d-b739-4944-90b6-ad421288f50a-utilities\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.814543 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e222997d-b739-4944-90b6-ad421288f50a-catalog-content\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.916626 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78f5\" (UniqueName: \"kubernetes.io/projected/e222997d-b739-4944-90b6-ad421288f50a-kube-api-access-g78f5\") pod \"redhat-marketplace-xnmfd\" (UID: 
\"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.916726 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e222997d-b739-4944-90b6-ad421288f50a-utilities\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.916830 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e222997d-b739-4944-90b6-ad421288f50a-catalog-content\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.917613 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e222997d-b739-4944-90b6-ad421288f50a-catalog-content\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.917753 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e222997d-b739-4944-90b6-ad421288f50a-utilities\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:31 crc kubenswrapper[4624]: I0228 03:41:31.950807 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78f5\" (UniqueName: \"kubernetes.io/projected/e222997d-b739-4944-90b6-ad421288f50a-kube-api-access-g78f5\") pod \"redhat-marketplace-xnmfd\" (UID: \"e222997d-b739-4944-90b6-ad421288f50a\") " 
pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.023641 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.514929 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnmfd"] Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.552408 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7796" event={"ID":"a8005398-5d8f-4adc-ae71-c01babe23241","Type":"ContainerStarted","Data":"7808e867a4548da797c7dfcfb5d76081b0be5c8f01c142816dab208638599b43"} Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.561260 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnmfd" event={"ID":"e222997d-b739-4944-90b6-ad421288f50a","Type":"ContainerStarted","Data":"375833ca82b598306a56345e5f2839a7946cc050d47edcf37f77e07a88a4e93d"} Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.700289 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mm8x2"] Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.704565 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.706917 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.713267 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm8x2"] Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.830077 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ef4d32-8412-48d6-b08f-7230cd574d66-utilities\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.830463 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ef4d32-8412-48d6-b08f-7230cd574d66-catalog-content\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.830641 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng889\" (UniqueName: \"kubernetes.io/projected/e9ef4d32-8412-48d6-b08f-7230cd574d66-kube-api-access-ng889\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.931853 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ef4d32-8412-48d6-b08f-7230cd574d66-utilities\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " 
pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.932215 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ef4d32-8412-48d6-b08f-7230cd574d66-catalog-content\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.932343 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng889\" (UniqueName: \"kubernetes.io/projected/e9ef4d32-8412-48d6-b08f-7230cd574d66-kube-api-access-ng889\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.932506 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9ef4d32-8412-48d6-b08f-7230cd574d66-utilities\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.932683 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9ef4d32-8412-48d6-b08f-7230cd574d66-catalog-content\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:32 crc kubenswrapper[4624]: I0228 03:41:32.951666 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng889\" (UniqueName: \"kubernetes.io/projected/e9ef4d32-8412-48d6-b08f-7230cd574d66-kube-api-access-ng889\") pod \"redhat-operators-mm8x2\" (UID: \"e9ef4d32-8412-48d6-b08f-7230cd574d66\") " pod="openshift-marketplace/redhat-operators-mm8x2" Feb 
28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.063552 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.311458 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm8x2"] Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.569430 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8005398-5d8f-4adc-ae71-c01babe23241" containerID="7808e867a4548da797c7dfcfb5d76081b0be5c8f01c142816dab208638599b43" exitCode=0 Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.569519 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7796" event={"ID":"a8005398-5d8f-4adc-ae71-c01babe23241","Type":"ContainerDied","Data":"7808e867a4548da797c7dfcfb5d76081b0be5c8f01c142816dab208638599b43"} Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.571730 4624 generic.go:334] "Generic (PLEG): container finished" podID="e9ef4d32-8412-48d6-b08f-7230cd574d66" containerID="2930878a22dc851318a76a97fb5a9c18902723ba652baa5d8dea754e8ce8f068" exitCode=0 Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.571794 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm8x2" event={"ID":"e9ef4d32-8412-48d6-b08f-7230cd574d66","Type":"ContainerDied","Data":"2930878a22dc851318a76a97fb5a9c18902723ba652baa5d8dea754e8ce8f068"} Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.571818 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm8x2" event={"ID":"e9ef4d32-8412-48d6-b08f-7230cd574d66","Type":"ContainerStarted","Data":"283a9d053308b079d0d17e5b99d128976f389bba3c7c109c25f33f3265ece14b"} Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.577702 4624 generic.go:334] "Generic (PLEG): container finished" podID="e222997d-b739-4944-90b6-ad421288f50a" 
containerID="926479cd6336b7a91629a687778d69133390e99947525815235b65ea636e4a79" exitCode=0 Feb 28 03:41:33 crc kubenswrapper[4624]: I0228 03:41:33.577764 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnmfd" event={"ID":"e222997d-b739-4944-90b6-ad421288f50a","Type":"ContainerDied","Data":"926479cd6336b7a91629a687778d69133390e99947525815235b65ea636e4a79"} Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.305447 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qq4nj"] Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.307683 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.320029 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qq4nj"] Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.321197 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.459166 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mbh\" (UniqueName: \"kubernetes.io/projected/d8226194-dd4d-461d-854a-131191db31f4-kube-api-access-88mbh\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.459243 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8226194-dd4d-461d-854a-131191db31f4-catalog-content\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 
crc kubenswrapper[4624]: I0228 03:41:34.459314 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8226194-dd4d-461d-854a-131191db31f4-utilities\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.560325 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8226194-dd4d-461d-854a-131191db31f4-catalog-content\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.560444 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8226194-dd4d-461d-854a-131191db31f4-utilities\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.560478 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mbh\" (UniqueName: \"kubernetes.io/projected/d8226194-dd4d-461d-854a-131191db31f4-kube-api-access-88mbh\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.561382 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8226194-dd4d-461d-854a-131191db31f4-catalog-content\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc 
kubenswrapper[4624]: I0228 03:41:34.561453 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8226194-dd4d-461d-854a-131191db31f4-utilities\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.581859 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mbh\" (UniqueName: \"kubernetes.io/projected/d8226194-dd4d-461d-854a-131191db31f4-kube-api-access-88mbh\") pod \"community-operators-qq4nj\" (UID: \"d8226194-dd4d-461d-854a-131191db31f4\") " pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.597435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7796" event={"ID":"a8005398-5d8f-4adc-ae71-c01babe23241","Type":"ContainerStarted","Data":"df00f4c30bb50fe3944a8bded38c9322fba9f7eeb927fe60e663d163ea39a4f2"} Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.600287 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm8x2" event={"ID":"e9ef4d32-8412-48d6-b08f-7230cd574d66","Type":"ContainerStarted","Data":"3bfe858a8b7bc591be1b3b09027f1d5cf00270be40731ed44dc989a6b224a9f9"} Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.603527 4624 generic.go:334] "Generic (PLEG): container finished" podID="e222997d-b739-4944-90b6-ad421288f50a" containerID="1328c38af062596ec9ba79a2caeac60854b48dc9fa25d518440d9f4131117587" exitCode=0 Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.603576 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnmfd" event={"ID":"e222997d-b739-4944-90b6-ad421288f50a","Type":"ContainerDied","Data":"1328c38af062596ec9ba79a2caeac60854b48dc9fa25d518440d9f4131117587"} Feb 28 03:41:34 crc 
kubenswrapper[4624]: I0228 03:41:34.621866 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7796" podStartSLOduration=2.199065782 podStartE2EDuration="4.621844173s" podCreationTimestamp="2026-02-28 03:41:30 +0000 UTC" firstStartedPulling="2026-02-28 03:41:31.546571178 +0000 UTC m=+346.210610487" lastFinishedPulling="2026-02-28 03:41:33.969349559 +0000 UTC m=+348.633388878" observedRunningTime="2026-02-28 03:41:34.620024703 +0000 UTC m=+349.284064022" watchObservedRunningTime="2026-02-28 03:41:34.621844173 +0000 UTC m=+349.285883482" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.629482 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:34 crc kubenswrapper[4624]: I0228 03:41:34.913997 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qq4nj"] Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.611269 4624 generic.go:334] "Generic (PLEG): container finished" podID="e9ef4d32-8412-48d6-b08f-7230cd574d66" containerID="3bfe858a8b7bc591be1b3b09027f1d5cf00270be40731ed44dc989a6b224a9f9" exitCode=0 Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.611351 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm8x2" event={"ID":"e9ef4d32-8412-48d6-b08f-7230cd574d66","Type":"ContainerDied","Data":"3bfe858a8b7bc591be1b3b09027f1d5cf00270be40731ed44dc989a6b224a9f9"} Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.616182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnmfd" event={"ID":"e222997d-b739-4944-90b6-ad421288f50a","Type":"ContainerStarted","Data":"bce59fc421213c7041c1927a9da1a32acb5d311ce461c975acfb6e1a389a47ff"} Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.618571 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="d8226194-dd4d-461d-854a-131191db31f4" containerID="2c2f990cf81f3d7e9ef831191bcaa9945da9b267064169d05737facf3bc2b476" exitCode=0 Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.618644 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qq4nj" event={"ID":"d8226194-dd4d-461d-854a-131191db31f4","Type":"ContainerDied","Data":"2c2f990cf81f3d7e9ef831191bcaa9945da9b267064169d05737facf3bc2b476"} Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.618686 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qq4nj" event={"ID":"d8226194-dd4d-461d-854a-131191db31f4","Type":"ContainerStarted","Data":"a9ebf153e61aad79fc5b98c4b0bce4a78ebaa79948031708c54477c9bbf6289c"} Feb 28 03:41:35 crc kubenswrapper[4624]: I0228 03:41:35.687686 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnmfd" podStartSLOduration=3.145969776 podStartE2EDuration="4.687664071s" podCreationTimestamp="2026-02-28 03:41:31 +0000 UTC" firstStartedPulling="2026-02-28 03:41:33.579235983 +0000 UTC m=+348.243275292" lastFinishedPulling="2026-02-28 03:41:35.120930288 +0000 UTC m=+349.784969587" observedRunningTime="2026-02-28 03:41:35.685620946 +0000 UTC m=+350.349660255" watchObservedRunningTime="2026-02-28 03:41:35.687664071 +0000 UTC m=+350.351703380" Feb 28 03:41:36 crc kubenswrapper[4624]: I0228 03:41:36.626963 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm8x2" event={"ID":"e9ef4d32-8412-48d6-b08f-7230cd574d66","Type":"ContainerStarted","Data":"9f3cbe8057247f7a19bb49cdbc4c1387e7475f3aa33ce0c4ba170fd953215cd1"} Feb 28 03:41:36 crc kubenswrapper[4624]: I0228 03:41:36.629148 4624 generic.go:334] "Generic (PLEG): container finished" podID="d8226194-dd4d-461d-854a-131191db31f4" containerID="b91c3d85b05c5a2ddc500f823e38c866d4ea9a3ccef0ed2fa08a82f9329b89cb" exitCode=0 Feb 28 
03:41:36 crc kubenswrapper[4624]: I0228 03:41:36.629224 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qq4nj" event={"ID":"d8226194-dd4d-461d-854a-131191db31f4","Type":"ContainerDied","Data":"b91c3d85b05c5a2ddc500f823e38c866d4ea9a3ccef0ed2fa08a82f9329b89cb"} Feb 28 03:41:36 crc kubenswrapper[4624]: I0228 03:41:36.656799 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mm8x2" podStartSLOduration=2.202096455 podStartE2EDuration="4.656775618s" podCreationTimestamp="2026-02-28 03:41:32 +0000 UTC" firstStartedPulling="2026-02-28 03:41:33.576853109 +0000 UTC m=+348.240892418" lastFinishedPulling="2026-02-28 03:41:36.031532272 +0000 UTC m=+350.695571581" observedRunningTime="2026-02-28 03:41:36.651012484 +0000 UTC m=+351.315051783" watchObservedRunningTime="2026-02-28 03:41:36.656775618 +0000 UTC m=+351.320814937" Feb 28 03:41:37 crc kubenswrapper[4624]: I0228 03:41:37.664720 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qq4nj" event={"ID":"d8226194-dd4d-461d-854a-131191db31f4","Type":"ContainerStarted","Data":"a6f1a95baab9d17acf459c3dc404dd83a5fadea67c4dc3af00d8f8a048495c2d"} Feb 28 03:41:37 crc kubenswrapper[4624]: I0228 03:41:37.695779 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qq4nj" podStartSLOduration=2.30559585 podStartE2EDuration="3.695745389s" podCreationTimestamp="2026-02-28 03:41:34 +0000 UTC" firstStartedPulling="2026-02-28 03:41:35.620019224 +0000 UTC m=+350.284058553" lastFinishedPulling="2026-02-28 03:41:37.010168783 +0000 UTC m=+351.674208092" observedRunningTime="2026-02-28 03:41:37.689354748 +0000 UTC m=+352.353394057" watchObservedRunningTime="2026-02-28 03:41:37.695745389 +0000 UTC m=+352.359784698" Feb 28 03:41:40 crc kubenswrapper[4624]: I0228 03:41:40.621380 4624 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:40 crc kubenswrapper[4624]: I0228 03:41:40.621876 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:40 crc kubenswrapper[4624]: I0228 03:41:40.670658 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:40 crc kubenswrapper[4624]: I0228 03:41:40.736165 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7796" Feb 28 03:41:42 crc kubenswrapper[4624]: I0228 03:41:42.024646 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:42 crc kubenswrapper[4624]: I0228 03:41:42.026300 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:42 crc kubenswrapper[4624]: I0228 03:41:42.078851 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:42 crc kubenswrapper[4624]: I0228 03:41:42.747378 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnmfd" Feb 28 03:41:43 crc kubenswrapper[4624]: I0228 03:41:43.064202 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:43 crc kubenswrapper[4624]: I0228 03:41:43.064258 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:43 crc kubenswrapper[4624]: I0228 03:41:43.107407 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:43 crc 
kubenswrapper[4624]: I0228 03:41:43.741716 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mm8x2" Feb 28 03:41:44 crc kubenswrapper[4624]: I0228 03:41:44.630522 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:44 crc kubenswrapper[4624]: I0228 03:41:44.630618 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:44 crc kubenswrapper[4624]: I0228 03:41:44.679072 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:44 crc kubenswrapper[4624]: I0228 03:41:44.752205 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qq4nj" Feb 28 03:41:45 crc kubenswrapper[4624]: I0228 03:41:45.414345 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" podUID="5823705e-af27-4b37-98f8-f73d31f69e02" containerName="registry" containerID="cri-o://2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14" gracePeriod=30 Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.403372 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.592881 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-trusted-ca\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593364 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-registry-tls\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593602 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593639 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-bound-sa-token\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593681 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823705e-af27-4b37-98f8-f73d31f69e02-ca-trust-extracted\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593718 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823705e-af27-4b37-98f8-f73d31f69e02-installation-pull-secrets\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593770 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phc2t\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-kube-api-access-phc2t\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.593807 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-registry-certificates\") pod \"5823705e-af27-4b37-98f8-f73d31f69e02\" (UID: \"5823705e-af27-4b37-98f8-f73d31f69e02\") " Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.594015 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.603912 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.604318 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5823705e-af27-4b37-98f8-f73d31f69e02-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.617705 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5823705e-af27-4b37-98f8-f73d31f69e02-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.621071 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.634707 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-kube-api-access-phc2t" (OuterVolumeSpecName: "kube-api-access-phc2t") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "kube-api-access-phc2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.637570 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.659230 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5823705e-af27-4b37-98f8-f73d31f69e02" (UID: "5823705e-af27-4b37-98f8-f73d31f69e02"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695125 4624 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5823705e-af27-4b37-98f8-f73d31f69e02-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695191 4624 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5823705e-af27-4b37-98f8-f73d31f69e02-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695206 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phc2t\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-kube-api-access-phc2t\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695216 4624 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695229 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5823705e-af27-4b37-98f8-f73d31f69e02-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695239 4624 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.695249 4624 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5823705e-af27-4b37-98f8-f73d31f69e02-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.725386 4624 generic.go:334] "Generic (PLEG): container finished" podID="5823705e-af27-4b37-98f8-f73d31f69e02" containerID="2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14" exitCode=0 Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.725438 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" event={"ID":"5823705e-af27-4b37-98f8-f73d31f69e02","Type":"ContainerDied","Data":"2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14"} Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.725472 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" event={"ID":"5823705e-af27-4b37-98f8-f73d31f69e02","Type":"ContainerDied","Data":"1042499339faec1765938add0c64281dfed899be18044873b1da8f6f0293d1ad"} Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.725499 4624 scope.go:117] "RemoveContainer" 
containerID="2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.725667 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzbdn" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.767368 4624 scope.go:117] "RemoveContainer" containerID="2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14" Feb 28 03:41:47 crc kubenswrapper[4624]: E0228 03:41:46.771248 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14\": container with ID starting with 2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14 not found: ID does not exist" containerID="2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.771299 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14"} err="failed to get container status \"2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14\": rpc error: code = NotFound desc = could not find container \"2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14\": container with ID starting with 2973ff8bf232ac7267778447f713d0d21f0042de09cf146d1b0731b2d6140c14 not found: ID does not exist" Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.782100 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzbdn"] Feb 28 03:41:47 crc kubenswrapper[4624]: I0228 03:41:46.787784 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzbdn"] Feb 28 03:41:48 crc kubenswrapper[4624]: I0228 03:41:48.094095 4624 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5823705e-af27-4b37-98f8-f73d31f69e02" path="/var/lib/kubelet/pods/5823705e-af27-4b37-98f8-f73d31f69e02/volumes" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.138929 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537502-bptlf"] Feb 28 03:42:00 crc kubenswrapper[4624]: E0228 03:42:00.139957 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5823705e-af27-4b37-98f8-f73d31f69e02" containerName="registry" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.139970 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5823705e-af27-4b37-98f8-f73d31f69e02" containerName="registry" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.140154 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5823705e-af27-4b37-98f8-f73d31f69e02" containerName="registry" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.140580 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.144955 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.144965 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.145282 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.149821 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537502-bptlf"] Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.301492 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/9914aecc-5e57-40f7-886d-a4290bef8682-kube-api-access-gx5hn\") pod \"auto-csr-approver-29537502-bptlf\" (UID: \"9914aecc-5e57-40f7-886d-a4290bef8682\") " pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.402490 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/9914aecc-5e57-40f7-886d-a4290bef8682-kube-api-access-gx5hn\") pod \"auto-csr-approver-29537502-bptlf\" (UID: \"9914aecc-5e57-40f7-886d-a4290bef8682\") " pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.425666 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/9914aecc-5e57-40f7-886d-a4290bef8682-kube-api-access-gx5hn\") pod \"auto-csr-approver-29537502-bptlf\" (UID: \"9914aecc-5e57-40f7-886d-a4290bef8682\") " 
pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.457453 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.666970 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537502-bptlf"] Feb 28 03:42:00 crc kubenswrapper[4624]: I0228 03:42:00.817012 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537502-bptlf" event={"ID":"9914aecc-5e57-40f7-886d-a4290bef8682","Type":"ContainerStarted","Data":"8dbf5fe7e72f69e0c768222a401f32a7c172353811aeee5646e7debcdc3d2110"} Feb 28 03:42:02 crc kubenswrapper[4624]: I0228 03:42:02.831758 4624 generic.go:334] "Generic (PLEG): container finished" podID="9914aecc-5e57-40f7-886d-a4290bef8682" containerID="bc01818307416a9da39d01c76b0942060b043b316459d6b312c4083abe8ea234" exitCode=0 Feb 28 03:42:02 crc kubenswrapper[4624]: I0228 03:42:02.832020 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537502-bptlf" event={"ID":"9914aecc-5e57-40f7-886d-a4290bef8682","Type":"ContainerDied","Data":"bc01818307416a9da39d01c76b0942060b043b316459d6b312c4083abe8ea234"} Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.128904 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.277514 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/9914aecc-5e57-40f7-886d-a4290bef8682-kube-api-access-gx5hn\") pod \"9914aecc-5e57-40f7-886d-a4290bef8682\" (UID: \"9914aecc-5e57-40f7-886d-a4290bef8682\") " Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.291575 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9914aecc-5e57-40f7-886d-a4290bef8682-kube-api-access-gx5hn" (OuterVolumeSpecName: "kube-api-access-gx5hn") pod "9914aecc-5e57-40f7-886d-a4290bef8682" (UID: "9914aecc-5e57-40f7-886d-a4290bef8682"). InnerVolumeSpecName "kube-api-access-gx5hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.380035 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx5hn\" (UniqueName: \"kubernetes.io/projected/9914aecc-5e57-40f7-886d-a4290bef8682-kube-api-access-gx5hn\") on node \"crc\" DevicePath \"\"" Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.846939 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537502-bptlf" event={"ID":"9914aecc-5e57-40f7-886d-a4290bef8682","Type":"ContainerDied","Data":"8dbf5fe7e72f69e0c768222a401f32a7c172353811aeee5646e7debcdc3d2110"} Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.847442 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dbf5fe7e72f69e0c768222a401f32a7c172353811aeee5646e7debcdc3d2110" Feb 28 03:42:04 crc kubenswrapper[4624]: I0228 03:42:04.847016 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537502-bptlf" Feb 28 03:42:19 crc kubenswrapper[4624]: I0228 03:42:19.540781 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:42:19 crc kubenswrapper[4624]: I0228 03:42:19.541904 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:42:49 crc kubenswrapper[4624]: I0228 03:42:49.540177 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:42:49 crc kubenswrapper[4624]: I0228 03:42:49.541198 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:43:19 crc kubenswrapper[4624]: I0228 03:43:19.539995 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:43:19 crc kubenswrapper[4624]: I0228 03:43:19.540983 4624 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:43:19 crc kubenswrapper[4624]: I0228 03:43:19.541062 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:43:19 crc kubenswrapper[4624]: I0228 03:43:19.542027 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e168d945aae1509ec586eba77f4d6730480448e3bb4c6899a07204dd97ea7324"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 03:43:19 crc kubenswrapper[4624]: I0228 03:43:19.542171 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://e168d945aae1509ec586eba77f4d6730480448e3bb4c6899a07204dd97ea7324" gracePeriod=600 Feb 28 03:43:20 crc kubenswrapper[4624]: I0228 03:43:20.908197 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="e168d945aae1509ec586eba77f4d6730480448e3bb4c6899a07204dd97ea7324" exitCode=0 Feb 28 03:43:20 crc kubenswrapper[4624]: I0228 03:43:20.908283 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"e168d945aae1509ec586eba77f4d6730480448e3bb4c6899a07204dd97ea7324"} Feb 28 03:43:20 crc kubenswrapper[4624]: I0228 
03:43:20.908990 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"1dbd0294073b9fff5d98ecfa7b194254718a0891e28c241ce447d5bdb488d8e1"} Feb 28 03:43:20 crc kubenswrapper[4624]: I0228 03:43:20.909026 4624 scope.go:117] "RemoveContainer" containerID="f81c68ce9fbdb1375ca7ccda71b9de0984294c2270817c2678cc7c745e541748" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.162906 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537504-tbjkt"] Feb 28 03:44:00 crc kubenswrapper[4624]: E0228 03:44:00.164486 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9914aecc-5e57-40f7-886d-a4290bef8682" containerName="oc" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.164521 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9914aecc-5e57-40f7-886d-a4290bef8682" containerName="oc" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.164810 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9914aecc-5e57-40f7-886d-a4290bef8682" containerName="oc" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.165793 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537504-tbjkt" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.170548 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.171958 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537504-tbjkt"] Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.172893 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.176967 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.282593 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc4fq\" (UniqueName: \"kubernetes.io/projected/cc500688-a991-467a-8f21-bb969392f09b-kube-api-access-kc4fq\") pod \"auto-csr-approver-29537504-tbjkt\" (UID: \"cc500688-a991-467a-8f21-bb969392f09b\") " pod="openshift-infra/auto-csr-approver-29537504-tbjkt" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.384347 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc4fq\" (UniqueName: \"kubernetes.io/projected/cc500688-a991-467a-8f21-bb969392f09b-kube-api-access-kc4fq\") pod \"auto-csr-approver-29537504-tbjkt\" (UID: \"cc500688-a991-467a-8f21-bb969392f09b\") " pod="openshift-infra/auto-csr-approver-29537504-tbjkt" Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.431391 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc4fq\" (UniqueName: \"kubernetes.io/projected/cc500688-a991-467a-8f21-bb969392f09b-kube-api-access-kc4fq\") pod \"auto-csr-approver-29537504-tbjkt\" (UID: \"cc500688-a991-467a-8f21-bb969392f09b\") " 
pod="openshift-infra/auto-csr-approver-29537504-tbjkt"
Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.497830 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537504-tbjkt"
Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.952182 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537504-tbjkt"]
Feb 28 03:44:00 crc kubenswrapper[4624]: I0228 03:44:00.975698 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 28 03:44:01 crc kubenswrapper[4624]: I0228 03:44:01.227658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537504-tbjkt" event={"ID":"cc500688-a991-467a-8f21-bb969392f09b","Type":"ContainerStarted","Data":"2a1b42443be035058d314cbe8864134a4ed80289337d1929a1476e8cb95aa08d"}
Feb 28 03:44:03 crc kubenswrapper[4624]: I0228 03:44:03.247319 4624 generic.go:334] "Generic (PLEG): container finished" podID="cc500688-a991-467a-8f21-bb969392f09b" containerID="bc3a3e29cb1a37d6d1bff38a6837a6b22b52144cd5908e9c33d3f9c67a322598" exitCode=0
Feb 28 03:44:03 crc kubenswrapper[4624]: I0228 03:44:03.247485 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537504-tbjkt" event={"ID":"cc500688-a991-467a-8f21-bb969392f09b","Type":"ContainerDied","Data":"bc3a3e29cb1a37d6d1bff38a6837a6b22b52144cd5908e9c33d3f9c67a322598"}
Feb 28 03:44:04 crc kubenswrapper[4624]: I0228 03:44:04.572436 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537504-tbjkt"
Feb 28 03:44:04 crc kubenswrapper[4624]: I0228 03:44:04.663752 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc4fq\" (UniqueName: \"kubernetes.io/projected/cc500688-a991-467a-8f21-bb969392f09b-kube-api-access-kc4fq\") pod \"cc500688-a991-467a-8f21-bb969392f09b\" (UID: \"cc500688-a991-467a-8f21-bb969392f09b\") "
Feb 28 03:44:04 crc kubenswrapper[4624]: I0228 03:44:04.678213 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc500688-a991-467a-8f21-bb969392f09b-kube-api-access-kc4fq" (OuterVolumeSpecName: "kube-api-access-kc4fq") pod "cc500688-a991-467a-8f21-bb969392f09b" (UID: "cc500688-a991-467a-8f21-bb969392f09b"). InnerVolumeSpecName "kube-api-access-kc4fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:44:04 crc kubenswrapper[4624]: I0228 03:44:04.767367 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc4fq\" (UniqueName: \"kubernetes.io/projected/cc500688-a991-467a-8f21-bb969392f09b-kube-api-access-kc4fq\") on node \"crc\" DevicePath \"\""
Feb 28 03:44:05 crc kubenswrapper[4624]: I0228 03:44:05.264962 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537504-tbjkt" event={"ID":"cc500688-a991-467a-8f21-bb969392f09b","Type":"ContainerDied","Data":"2a1b42443be035058d314cbe8864134a4ed80289337d1929a1476e8cb95aa08d"}
Feb 28 03:44:05 crc kubenswrapper[4624]: I0228 03:44:05.265031 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a1b42443be035058d314cbe8864134a4ed80289337d1929a1476e8cb95aa08d"
Feb 28 03:44:05 crc kubenswrapper[4624]: I0228 03:44:05.265163 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537504-tbjkt"
Feb 28 03:44:05 crc kubenswrapper[4624]: I0228 03:44:05.665429 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537498-x9wgz"]
Feb 28 03:44:05 crc kubenswrapper[4624]: I0228 03:44:05.673383 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537498-x9wgz"]
Feb 28 03:44:06 crc kubenswrapper[4624]: I0228 03:44:06.100939 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70fd0c80-14b5-4af4-bc5a-7ca64460bc65" path="/var/lib/kubelet/pods/70fd0c80-14b5-4af4-bc5a-7ca64460bc65/volumes"
Feb 28 03:44:46 crc kubenswrapper[4624]: I0228 03:44:46.701664 4624 scope.go:117] "RemoveContainer" containerID="3101d6ac8b3373b027116bbb16248a0a75ca938bf16ce168335013f09725cc05"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.156970 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"]
Feb 28 03:45:00 crc kubenswrapper[4624]: E0228 03:45:00.157922 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc500688-a991-467a-8f21-bb969392f09b" containerName="oc"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.157942 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc500688-a991-467a-8f21-bb969392f09b" containerName="oc"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.158243 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc500688-a991-467a-8f21-bb969392f09b" containerName="oc"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.159034 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.161065 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9384484-a89a-487d-9cc3-327226cc1847-secret-volume\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.161124 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9384484-a89a-487d-9cc3-327226cc1847-config-volume\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.161209 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7shcp\" (UniqueName: \"kubernetes.io/projected/c9384484-a89a-487d-9cc3-327226cc1847-kube-api-access-7shcp\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.163910 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.171438 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"]
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.173329 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.262368 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9384484-a89a-487d-9cc3-327226cc1847-secret-volume\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.262432 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9384484-a89a-487d-9cc3-327226cc1847-config-volume\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.262554 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7shcp\" (UniqueName: \"kubernetes.io/projected/c9384484-a89a-487d-9cc3-327226cc1847-kube-api-access-7shcp\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.263800 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9384484-a89a-487d-9cc3-327226cc1847-config-volume\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.270466 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9384484-a89a-487d-9cc3-327226cc1847-secret-volume\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.283165 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7shcp\" (UniqueName: \"kubernetes.io/projected/c9384484-a89a-487d-9cc3-327226cc1847-kube-api-access-7shcp\") pod \"collect-profiles-29537505-lrm8g\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.515446 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:00 crc kubenswrapper[4624]: I0228 03:45:00.774425 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"]
Feb 28 03:45:01 crc kubenswrapper[4624]: I0228 03:45:01.733537 4624 generic.go:334] "Generic (PLEG): container finished" podID="c9384484-a89a-487d-9cc3-327226cc1847" containerID="bab4fd0b0c41cad86b5a833601d69e979905ca0903ecd9a9b312391960ea1748" exitCode=0
Feb 28 03:45:01 crc kubenswrapper[4624]: I0228 03:45:01.733631 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g" event={"ID":"c9384484-a89a-487d-9cc3-327226cc1847","Type":"ContainerDied","Data":"bab4fd0b0c41cad86b5a833601d69e979905ca0903ecd9a9b312391960ea1748"}
Feb 28 03:45:01 crc kubenswrapper[4624]: I0228 03:45:01.733957 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g" event={"ID":"c9384484-a89a-487d-9cc3-327226cc1847","Type":"ContainerStarted","Data":"92049864b80ca531846ead0992f224e005280354c911e093b16551e11da72f39"}
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.097468 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.126312 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9384484-a89a-487d-9cc3-327226cc1847-secret-volume\") pod \"c9384484-a89a-487d-9cc3-327226cc1847\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") "
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.126433 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7shcp\" (UniqueName: \"kubernetes.io/projected/c9384484-a89a-487d-9cc3-327226cc1847-kube-api-access-7shcp\") pod \"c9384484-a89a-487d-9cc3-327226cc1847\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") "
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.126514 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9384484-a89a-487d-9cc3-327226cc1847-config-volume\") pod \"c9384484-a89a-487d-9cc3-327226cc1847\" (UID: \"c9384484-a89a-487d-9cc3-327226cc1847\") "
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.127948 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9384484-a89a-487d-9cc3-327226cc1847-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9384484-a89a-487d-9cc3-327226cc1847" (UID: "c9384484-a89a-487d-9cc3-327226cc1847"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.128657 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9384484-a89a-487d-9cc3-327226cc1847-config-volume\") on node \"crc\" DevicePath \"\""
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.135557 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9384484-a89a-487d-9cc3-327226cc1847-kube-api-access-7shcp" (OuterVolumeSpecName: "kube-api-access-7shcp") pod "c9384484-a89a-487d-9cc3-327226cc1847" (UID: "c9384484-a89a-487d-9cc3-327226cc1847"). InnerVolumeSpecName "kube-api-access-7shcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.140323 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9384484-a89a-487d-9cc3-327226cc1847-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9384484-a89a-487d-9cc3-327226cc1847" (UID: "c9384484-a89a-487d-9cc3-327226cc1847"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.229601 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9384484-a89a-487d-9cc3-327226cc1847-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.229642 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7shcp\" (UniqueName: \"kubernetes.io/projected/c9384484-a89a-487d-9cc3-327226cc1847-kube-api-access-7shcp\") on node \"crc\" DevicePath \"\""
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.753575 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g" event={"ID":"c9384484-a89a-487d-9cc3-327226cc1847","Type":"ContainerDied","Data":"92049864b80ca531846ead0992f224e005280354c911e093b16551e11da72f39"}
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.753626 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"
Feb 28 03:45:03 crc kubenswrapper[4624]: I0228 03:45:03.753648 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92049864b80ca531846ead0992f224e005280354c911e093b16551e11da72f39"
Feb 28 03:45:19 crc kubenswrapper[4624]: I0228 03:45:19.540655 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 03:45:19 crc kubenswrapper[4624]: I0228 03:45:19.541377 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 03:45:49 crc kubenswrapper[4624]: I0228 03:45:49.540940 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 03:45:49 crc kubenswrapper[4624]: I0228 03:45:49.541984 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.143181 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537506-t48b7"]
Feb 28 03:46:00 crc kubenswrapper[4624]: E0228 03:46:00.144101 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9384484-a89a-487d-9cc3-327226cc1847" containerName="collect-profiles"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.144117 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9384484-a89a-487d-9cc3-327226cc1847" containerName="collect-profiles"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.144241 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9384484-a89a-487d-9cc3-327226cc1847" containerName="collect-profiles"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.144741 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.149274 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.150441 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.151192 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537506-t48b7"]
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.154709 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.331392 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8d8q\" (UniqueName: \"kubernetes.io/projected/e032ea25-286f-4ab4-93dc-2f1aefee2245-kube-api-access-q8d8q\") pod \"auto-csr-approver-29537506-t48b7\" (UID: \"e032ea25-286f-4ab4-93dc-2f1aefee2245\") " pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.433071 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8d8q\" (UniqueName: \"kubernetes.io/projected/e032ea25-286f-4ab4-93dc-2f1aefee2245-kube-api-access-q8d8q\") pod \"auto-csr-approver-29537506-t48b7\" (UID: \"e032ea25-286f-4ab4-93dc-2f1aefee2245\") " pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.471913 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8d8q\" (UniqueName: \"kubernetes.io/projected/e032ea25-286f-4ab4-93dc-2f1aefee2245-kube-api-access-q8d8q\") pod \"auto-csr-approver-29537506-t48b7\" (UID: \"e032ea25-286f-4ab4-93dc-2f1aefee2245\") " pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.481364 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:00 crc kubenswrapper[4624]: I0228 03:46:00.760552 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537506-t48b7"]
Feb 28 03:46:01 crc kubenswrapper[4624]: I0228 03:46:01.184883 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537506-t48b7" event={"ID":"e032ea25-286f-4ab4-93dc-2f1aefee2245","Type":"ContainerStarted","Data":"5ab994ecbd1235d466ef14db1ec1cf21c7f8b3b3cf2af6db55e798bb51527d00"}
Feb 28 03:46:02 crc kubenswrapper[4624]: I0228 03:46:02.193340 4624 generic.go:334] "Generic (PLEG): container finished" podID="e032ea25-286f-4ab4-93dc-2f1aefee2245" containerID="5f1faf0c35e070e364ff9019d56b40f992e90347a4e439f9c1a80f2036fe03ec" exitCode=0
Feb 28 03:46:02 crc kubenswrapper[4624]: I0228 03:46:02.193421 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537506-t48b7" event={"ID":"e032ea25-286f-4ab4-93dc-2f1aefee2245","Type":"ContainerDied","Data":"5f1faf0c35e070e364ff9019d56b40f992e90347a4e439f9c1a80f2036fe03ec"}
Feb 28 03:46:03 crc kubenswrapper[4624]: I0228 03:46:03.537537 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:03 crc kubenswrapper[4624]: I0228 03:46:03.684038 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8d8q\" (UniqueName: \"kubernetes.io/projected/e032ea25-286f-4ab4-93dc-2f1aefee2245-kube-api-access-q8d8q\") pod \"e032ea25-286f-4ab4-93dc-2f1aefee2245\" (UID: \"e032ea25-286f-4ab4-93dc-2f1aefee2245\") "
Feb 28 03:46:03 crc kubenswrapper[4624]: I0228 03:46:03.696199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e032ea25-286f-4ab4-93dc-2f1aefee2245-kube-api-access-q8d8q" (OuterVolumeSpecName: "kube-api-access-q8d8q") pod "e032ea25-286f-4ab4-93dc-2f1aefee2245" (UID: "e032ea25-286f-4ab4-93dc-2f1aefee2245"). InnerVolumeSpecName "kube-api-access-q8d8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:46:03 crc kubenswrapper[4624]: I0228 03:46:03.786455 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8d8q\" (UniqueName: \"kubernetes.io/projected/e032ea25-286f-4ab4-93dc-2f1aefee2245-kube-api-access-q8d8q\") on node \"crc\" DevicePath \"\""
Feb 28 03:46:04 crc kubenswrapper[4624]: I0228 03:46:04.211752 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537506-t48b7" event={"ID":"e032ea25-286f-4ab4-93dc-2f1aefee2245","Type":"ContainerDied","Data":"5ab994ecbd1235d466ef14db1ec1cf21c7f8b3b3cf2af6db55e798bb51527d00"}
Feb 28 03:46:04 crc kubenswrapper[4624]: I0228 03:46:04.211816 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab994ecbd1235d466ef14db1ec1cf21c7f8b3b3cf2af6db55e798bb51527d00"
Feb 28 03:46:04 crc kubenswrapper[4624]: I0228 03:46:04.211882 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537506-t48b7"
Feb 28 03:46:04 crc kubenswrapper[4624]: I0228 03:46:04.626794 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537500-rfqsj"]
Feb 28 03:46:04 crc kubenswrapper[4624]: I0228 03:46:04.636787 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537500-rfqsj"]
Feb 28 03:46:06 crc kubenswrapper[4624]: I0228 03:46:06.095718 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3585a2f-987d-439f-af8d-24734fd1c702" path="/var/lib/kubelet/pods/d3585a2f-987d-439f-af8d-24734fd1c702/volumes"
Feb 28 03:46:19 crc kubenswrapper[4624]: I0228 03:46:19.539748 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 03:46:19 crc kubenswrapper[4624]: I0228 03:46:19.540491 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 03:46:19 crc kubenswrapper[4624]: I0228 03:46:19.540560 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv"
Feb 28 03:46:19 crc kubenswrapper[4624]: I0228 03:46:19.541268 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1dbd0294073b9fff5d98ecfa7b194254718a0891e28c241ce447d5bdb488d8e1"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 28 03:46:19 crc kubenswrapper[4624]: I0228 03:46:19.541350 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://1dbd0294073b9fff5d98ecfa7b194254718a0891e28c241ce447d5bdb488d8e1" gracePeriod=600
Feb 28 03:46:20 crc kubenswrapper[4624]: I0228 03:46:20.328122 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="1dbd0294073b9fff5d98ecfa7b194254718a0891e28c241ce447d5bdb488d8e1" exitCode=0
Feb 28 03:46:20 crc kubenswrapper[4624]: I0228 03:46:20.328185 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"1dbd0294073b9fff5d98ecfa7b194254718a0891e28c241ce447d5bdb488d8e1"}
Feb 28 03:46:20 crc kubenswrapper[4624]: I0228 03:46:20.328764 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"d90ea216a2f4b67d549472e18b2176a4478f7a69481157402ae530c48f3b1213"}
Feb 28 03:46:20 crc kubenswrapper[4624]: I0228 03:46:20.328793 4624 scope.go:117] "RemoveContainer" containerID="e168d945aae1509ec586eba77f4d6730480448e3bb4c6899a07204dd97ea7324"
Feb 28 03:46:46 crc kubenswrapper[4624]: I0228 03:46:46.800347 4624 scope.go:117] "RemoveContainer" containerID="6ccbd3acbc14da951585b4b617d1a1bdb7cbd928e0ef0c1254fb6916fa591e8a"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.804391 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"]
Feb 28 03:46:50 crc kubenswrapper[4624]: E0228 03:46:50.805167 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e032ea25-286f-4ab4-93dc-2f1aefee2245" containerName="oc"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.805181 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e032ea25-286f-4ab4-93dc-2f1aefee2245" containerName="oc"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.805309 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e032ea25-286f-4ab4-93dc-2f1aefee2245" containerName="oc"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.805730 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.808767 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.808856 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.809519 4624 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-h2vdb"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.815638 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"]
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.851922 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-pbztg"]
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.852699 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pbztg"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.860184 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q7sw6"]
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.860950 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.862587 4624 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vxfld"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.867450 4624 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s7jpp"
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.885226 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q7sw6"]
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.912308 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pbztg"]
Feb 28 03:46:50 crc kubenswrapper[4624]: I0228 03:46:50.914661 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7s4g\" (UniqueName: \"kubernetes.io/projected/18477b71-69e7-4103-949d-4c377e3f9246-kube-api-access-f7s4g\") pod \"cert-manager-cainjector-cf98fcc89-t4xf2\" (UID: \"18477b71-69e7-4103-949d-4c377e3f9246\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.017404 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wk2\" (UniqueName: \"kubernetes.io/projected/c3c9f58c-1f61-4731-b062-8bc0f3044e68-kube-api-access-64wk2\") pod \"cert-manager-858654f9db-pbztg\" (UID: \"c3c9f58c-1f61-4731-b062-8bc0f3044e68\") " pod="cert-manager/cert-manager-858654f9db-pbztg"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.017532 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7s4g\" (UniqueName: \"kubernetes.io/projected/18477b71-69e7-4103-949d-4c377e3f9246-kube-api-access-f7s4g\") pod \"cert-manager-cainjector-cf98fcc89-t4xf2\" (UID: \"18477b71-69e7-4103-949d-4c377e3f9246\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.017960 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kl9t\" (UniqueName: \"kubernetes.io/projected/4ca9316a-d88d-402c-a943-f858bc793848-kube-api-access-5kl9t\") pod \"cert-manager-webhook-687f57d79b-q7sw6\" (UID: \"4ca9316a-d88d-402c-a943-f858bc793848\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.047659 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7s4g\" (UniqueName: \"kubernetes.io/projected/18477b71-69e7-4103-949d-4c377e3f9246-kube-api-access-f7s4g\") pod \"cert-manager-cainjector-cf98fcc89-t4xf2\" (UID: \"18477b71-69e7-4103-949d-4c377e3f9246\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.119243 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64wk2\" (UniqueName: \"kubernetes.io/projected/c3c9f58c-1f61-4731-b062-8bc0f3044e68-kube-api-access-64wk2\") pod \"cert-manager-858654f9db-pbztg\" (UID: \"c3c9f58c-1f61-4731-b062-8bc0f3044e68\") " pod="cert-manager/cert-manager-858654f9db-pbztg"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.119316 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kl9t\" (UniqueName: \"kubernetes.io/projected/4ca9316a-d88d-402c-a943-f858bc793848-kube-api-access-5kl9t\") pod \"cert-manager-webhook-687f57d79b-q7sw6\" (UID: \"4ca9316a-d88d-402c-a943-f858bc793848\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.129546 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.145184 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kl9t\" (UniqueName: \"kubernetes.io/projected/4ca9316a-d88d-402c-a943-f858bc793848-kube-api-access-5kl9t\") pod \"cert-manager-webhook-687f57d79b-q7sw6\" (UID: \"4ca9316a-d88d-402c-a943-f858bc793848\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.150967 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64wk2\" (UniqueName: \"kubernetes.io/projected/c3c9f58c-1f61-4731-b062-8bc0f3044e68-kube-api-access-64wk2\") pod \"cert-manager-858654f9db-pbztg\" (UID: \"c3c9f58c-1f61-4731-b062-8bc0f3044e68\") " pod="cert-manager/cert-manager-858654f9db-pbztg"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.170880 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pbztg"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.177460 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6"
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.401899 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2"]
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.462113 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pbztg"]
Feb 28 03:46:51 crc kubenswrapper[4624]: W0228 03:46:51.465420 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3c9f58c_1f61_4731_b062_8bc0f3044e68.slice/crio-08347c13b398fa67067614ea9a0aae834183e081959f91097a02cf2b87d182aa WatchSource:0}: Error finding container 08347c13b398fa67067614ea9a0aae834183e081959f91097a02cf2b87d182aa: Status 404 returned error can't find the container with id 08347c13b398fa67067614ea9a0aae834183e081959f91097a02cf2b87d182aa
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.494849 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q7sw6"]
Feb 28 03:46:51 crc kubenswrapper[4624]: W0228 03:46:51.496540 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ca9316a_d88d_402c_a943_f858bc793848.slice/crio-a609df8eb6a560f3a0f09e99399abaadd2781766a131e5ad73f2413a2288bc1f WatchSource:0}: Error finding container a609df8eb6a560f3a0f09e99399abaadd2781766a131e5ad73f2413a2288bc1f: Status 404 returned error can't find the container with id a609df8eb6a560f3a0f09e99399abaadd2781766a131e5ad73f2413a2288bc1f
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.574436 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2" event={"ID":"18477b71-69e7-4103-949d-4c377e3f9246","Type":"ContainerStarted","Data":"71f04b5f8a397a9c37c24b4f901963c6a912b0cbed18c0d023c244b6331f9d60"}
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.575753 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pbztg" event={"ID":"c3c9f58c-1f61-4731-b062-8bc0f3044e68","Type":"ContainerStarted","Data":"08347c13b398fa67067614ea9a0aae834183e081959f91097a02cf2b87d182aa"}
Feb 28 03:46:51 crc kubenswrapper[4624]: I0228 03:46:51.576854 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6" event={"ID":"4ca9316a-d88d-402c-a943-f858bc793848","Type":"ContainerStarted","Data":"a609df8eb6a560f3a0f09e99399abaadd2781766a131e5ad73f2413a2288bc1f"}
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.617342 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2" event={"ID":"18477b71-69e7-4103-949d-4c377e3f9246","Type":"ContainerStarted","Data":"d5bb4189b3e052bbc1900a999e1598e8094e4982760d3b41d78d9667ab15bdcb"}
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.621962 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pbztg" event={"ID":"c3c9f58c-1f61-4731-b062-8bc0f3044e68","Type":"ContainerStarted","Data":"09338d6a281c6ace8221b65949990365e847710c77e1acebc90a600a71769b25"}
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.625904 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6" event={"ID":"4ca9316a-d88d-402c-a943-f858bc793848","Type":"ContainerStarted","Data":"46f989a5f6f7524bf009ea9978d29156a1c125446a4c232d1852524f8f47fe72"}
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.626056 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6"
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.640937 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-t4xf2" podStartSLOduration=2.598728037 podStartE2EDuration="6.640914005s" podCreationTimestamp="2026-02-28 03:46:50 +0000 UTC" firstStartedPulling="2026-02-28 03:46:51.413311954 +0000 UTC m=+666.077351253" lastFinishedPulling="2026-02-28 03:46:55.455497882 +0000 UTC m=+670.119537221" observedRunningTime="2026-02-28 03:46:56.636331091 +0000 UTC m=+671.300370410" watchObservedRunningTime="2026-02-28 03:46:56.640914005 +0000 UTC m=+671.304953324"
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.686778 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-pbztg" podStartSLOduration=2.603898118 podStartE2EDuration="6.686744401s" podCreationTimestamp="2026-02-28 03:46:50 +0000 UTC" firstStartedPulling="2026-02-28 03:46:51.468183575 +0000 UTC m=+666.132222884" lastFinishedPulling="2026-02-28 03:46:55.551029808 +0000 UTC m=+670.215069167" observedRunningTime="2026-02-28 03:46:56.66499409 +0000 UTC m=+671.329033419" watchObservedRunningTime="2026-02-28 03:46:56.686744401 +0000 UTC m=+671.350783720"
Feb 28 03:46:56 crc kubenswrapper[4624]: I0228 03:46:56.729216 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6" podStartSLOduration=2.771367628 podStartE2EDuration="6.729191484s" podCreationTimestamp="2026-02-28 03:46:50 +0000 UTC" firstStartedPulling="2026-02-28 03:46:51.4996608 +0000 UTC m=+666.163700109" lastFinishedPulling="2026-02-28 03:46:55.457484616 +0000 UTC m=+670.121523965" observedRunningTime="2026-02-28 03:46:56.715807431 +0000 UTC m=+671.379846740" watchObservedRunningTime="2026-02-28 03:46:56.729191484 +0000 UTC m=+671.393230803"
Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.630201 4624 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openshift-ovn-kubernetes/ovnkube-node-hd6z8"] Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631182 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-controller" containerID="cri-o://5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631668 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="sbdb" containerID="cri-o://ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631718 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="nbdb" containerID="cri-o://57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631764 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="northd" containerID="cri-o://3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631811 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631860 4624 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-node" containerID="cri-o://5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.631911 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-acl-logging" containerID="cri-o://596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.679934 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovnkube-controller" containerID="cri-o://f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" gracePeriod=30 Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.983019 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hd6z8_54aef42d-7730-464b-90c7-1d8bdf5e622c/ovn-acl-logging/0.log" Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.984052 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hd6z8_54aef42d-7730-464b-90c7-1d8bdf5e622c/ovn-controller/0.log" Feb 28 03:47:00 crc kubenswrapper[4624]: I0228 03:47:00.984694 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.051765 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jfg72"] Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052039 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="sbdb" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052055 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="sbdb" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052072 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-node" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052101 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-node" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052113 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovnkube-controller" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052121 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovnkube-controller" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052133 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="nbdb" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052141 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="nbdb" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052165 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" 
containerName="kubecfg-setup" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052173 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kubecfg-setup" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052188 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-controller" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052196 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-controller" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052208 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052216 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052227 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="northd" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052235 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="northd" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.052245 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-acl-logging" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052253 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-acl-logging" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052391 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" 
containerName="nbdb" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052411 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="sbdb" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052419 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovnkube-controller" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052431 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-node" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052445 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="northd" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052456 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-controller" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052467 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.052479 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerName="ovn-acl-logging" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.054997 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.092227 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-etc-openvswitch\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.092750 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-netns\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.092896 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-config\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.092954 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-node-log\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.092975 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-netd\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.093002 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-script-lib\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.093035 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-systemd-units\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.094042 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-slash\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.094071 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-systemd\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.094250 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-ovn\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.094404 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-openvswitch\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: 
I0228 03:47:01.094603 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovn-node-metrics-cert\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.094640 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-env-overrides\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095148 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-var-lib-openvswitch\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095294 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095346 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095377 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095360 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-bin\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095465 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095879 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095927 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-node-log" (OuterVolumeSpecName: "node-log") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.095952 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096133 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-log-socket\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096173 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-kubelet\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096201 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-ovn-kubernetes\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 
03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096223 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096230 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhhhz\" (UniqueName: \"kubernetes.io/projected/54aef42d-7730-464b-90c7-1d8bdf5e622c-kube-api-access-mhhhz\") pod \"54aef42d-7730-464b-90c7-1d8bdf5e622c\" (UID: \"54aef42d-7730-464b-90c7-1d8bdf5e622c\") " Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096434 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096473 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-slash" (OuterVolumeSpecName: "host-slash") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096641 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-cni-netd\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096682 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-env-overrides\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096749 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-systemd\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096775 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-node-log\") pod \"ovnkube-node-jfg72\" (UID: 
\"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096800 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096826 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws7x\" (UniqueName: \"kubernetes.io/projected/fb229076-6ce9-450d-8c7f-99d5666b416d-kube-api-access-pws7x\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096860 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-systemd-units\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096895 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-slash\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096930 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-cni-bin\") pod 
\"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.096960 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-log-socket\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.097007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-var-lib-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.097661 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-kubelet\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.097828 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb229076-6ce9-450d-8c7f-99d5666b416d-ovn-node-metrics-cert\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.097992 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-ovn\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.098151 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-ovnkube-config\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.098312 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-ovnkube-script-lib\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.098462 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-run-netns\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.098617 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.098766 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-etc-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099140 4624 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099269 4624 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099293 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099309 4624 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-node-log\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099433 4624 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099456 4624 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099471 4624 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099594 4624 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-slash\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099620 4624 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/54aef42d-7730-464b-90c7-1d8bdf5e622c-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.099795 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100202 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100237 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). 
InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100293 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100365 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100403 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-log-socket" (OuterVolumeSpecName: "log-socket") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100534 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.100707 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.104721 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54aef42d-7730-464b-90c7-1d8bdf5e622c-kube-api-access-mhhhz" (OuterVolumeSpecName: "kube-api-access-mhhhz") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "kube-api-access-mhhhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.104764 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.135754 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "54aef42d-7730-464b-90c7-1d8bdf5e622c" (UID: "54aef42d-7730-464b-90c7-1d8bdf5e622c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.181796 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q7sw6" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.201853 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-cni-netd\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.202005 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-env-overrides\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.201945 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-cni-netd\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.202843 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-env-overrides\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.203196 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-node-log\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206192 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-node-log\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206506 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.203227 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206631 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-systemd\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206666 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-openvswitch\") 
pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206708 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws7x\" (UniqueName: \"kubernetes.io/projected/fb229076-6ce9-450d-8c7f-99d5666b416d-kube-api-access-pws7x\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206771 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-systemd\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.206839 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.207954 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-systemd-units\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208028 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-slash\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208108 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-cni-bin\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208172 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-log-socket\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208238 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-var-lib-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208292 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-kubelet\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208353 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb229076-6ce9-450d-8c7f-99d5666b416d-ovn-node-metrics-cert\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc 
kubenswrapper[4624]: I0228 03:47:01.208407 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-ovn\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208438 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-ovnkube-config\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208495 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-ovnkube-script-lib\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208518 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-run-netns\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208544 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 
03:47:01.208570 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-etc-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208645 4624 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208659 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54aef42d-7730-464b-90c7-1d8bdf5e622c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208676 4624 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208686 4624 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208696 4624 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208706 4624 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-log-socket\") on node \"crc\" 
DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208717 4624 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208733 4624 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208748 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhhhz\" (UniqueName: \"kubernetes.io/projected/54aef42d-7730-464b-90c7-1d8bdf5e622c-kube-api-access-mhhhz\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208764 4624 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208775 4624 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/54aef42d-7730-464b-90c7-1d8bdf5e622c-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208809 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-etc-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-systemd-units\") pod 
\"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208862 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-slash\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.208885 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-cni-bin\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.209584 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-run-ovn\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.209669 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-kubelet\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.209878 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-var-lib-openvswitch\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 
crc kubenswrapper[4624]: I0228 03:47:01.209917 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-run-netns\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.210272 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.210582 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-ovnkube-config\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.211227 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb229076-6ce9-450d-8c7f-99d5666b416d-ovnkube-script-lib\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.211274 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb229076-6ce9-450d-8c7f-99d5666b416d-log-socket\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.215333 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb229076-6ce9-450d-8c7f-99d5666b416d-ovn-node-metrics-cert\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.230141 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws7x\" (UniqueName: \"kubernetes.io/projected/fb229076-6ce9-450d-8c7f-99d5666b416d-kube-api-access-pws7x\") pod \"ovnkube-node-jfg72\" (UID: \"fb229076-6ce9-450d-8c7f-99d5666b416d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.369617 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.671645 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p5wwn_e8725d1d-2c0b-4f59-8489-f5f38f8e4d77/kube-multus/0.log" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.671702 4624 generic.go:334] "Generic (PLEG): container finished" podID="e8725d1d-2c0b-4f59-8489-f5f38f8e4d77" containerID="cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5" exitCode=2 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.671770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p5wwn" event={"ID":"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77","Type":"ContainerDied","Data":"cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.672428 4624 scope.go:117] "RemoveContainer" containerID="cc54e0a00b9d971c87e4bcad6c0d66ebc9a531c98051e13daf0a9758bc6aaba5" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.676222 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hd6z8_54aef42d-7730-464b-90c7-1d8bdf5e622c/ovn-acl-logging/0.log" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.679552 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hd6z8_54aef42d-7730-464b-90c7-1d8bdf5e622c/ovn-controller/0.log" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680007 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680031 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680041 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680048 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680056 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680065 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680072 4624 generic.go:334] "Generic (PLEG): 
container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" exitCode=143 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680102 4624 generic.go:334] "Generic (PLEG): container finished" podID="54aef42d-7730-464b-90c7-1d8bdf5e622c" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" exitCode=143 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680113 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680170 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680185 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680199 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680212 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} 
Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680226 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680241 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680255 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680263 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680275 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680288 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680296 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680303 4624 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680310 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680318 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680325 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680264 4624 scope.go:117] "RemoveContainer" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680333 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680426 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680435 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680444 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" 
event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680456 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680461 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680467 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680472 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680477 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680481 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680486 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680492 4624 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680497 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680504 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" event={"ID":"54aef42d-7730-464b-90c7-1d8bdf5e622c","Type":"ContainerDied","Data":"979ebb3d7c3678d4131030b32d99891f3b710cb32fadc7463cd1a555e7e7d56f"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680513 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680520 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680525 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680530 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680535 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} Feb 28 03:47:01 crc kubenswrapper[4624]: 
I0228 03:47:01.680540 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680545 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680551 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680556 4624 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.680905 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hd6z8" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.682511 4624 generic.go:334] "Generic (PLEG): container finished" podID="fb229076-6ce9-450d-8c7f-99d5666b416d" containerID="21c227a025647b49a4812db0d6d1278eaa72cf8fc413681b083d0f93c0afb1ca" exitCode=0 Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.682545 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerDied","Data":"21c227a025647b49a4812db0d6d1278eaa72cf8fc413681b083d0f93c0afb1ca"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.682577 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"ec4a57b1eb98d63ea7a25db3271cbbb344bad7f551e5980f7b1c2aa2850238ce"} Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.705971 4624 scope.go:117] "RemoveContainer" containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.743561 4624 scope.go:117] "RemoveContainer" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.777808 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hd6z8"] Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.778322 4624 scope.go:117] "RemoveContainer" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.783293 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hd6z8"] Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.829241 4624 scope.go:117] "RemoveContainer" 
containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.858226 4624 scope.go:117] "RemoveContainer" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.879282 4624 scope.go:117] "RemoveContainer" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.904886 4624 scope.go:117] "RemoveContainer" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.935267 4624 scope.go:117] "RemoveContainer" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.967523 4624 scope.go:117] "RemoveContainer" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.968158 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": container with ID starting with f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f not found: ID does not exist" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.968274 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} err="failed to get container status \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": rpc error: code = NotFound desc = could not find container \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": container with ID starting with 
f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.968333 4624 scope.go:117] "RemoveContainer" containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.970217 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": container with ID starting with ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089 not found: ID does not exist" containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.970272 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} err="failed to get container status \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": rpc error: code = NotFound desc = could not find container \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": container with ID starting with ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.970308 4624 scope.go:117] "RemoveContainer" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.971735 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": container with ID starting with 57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e not found: ID does not exist" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:01 crc 
kubenswrapper[4624]: I0228 03:47:01.971854 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} err="failed to get container status \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": rpc error: code = NotFound desc = could not find container \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": container with ID starting with 57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.971931 4624 scope.go:117] "RemoveContainer" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.972386 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": container with ID starting with 3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0 not found: ID does not exist" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.972459 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} err="failed to get container status \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": rpc error: code = NotFound desc = could not find container \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": container with ID starting with 3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.972509 4624 scope.go:117] "RemoveContainer" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 
03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.972842 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": container with ID starting with 1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c not found: ID does not exist" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.972941 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} err="failed to get container status \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": rpc error: code = NotFound desc = could not find container \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": container with ID starting with 1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.973193 4624 scope.go:117] "RemoveContainer" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.973600 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": container with ID starting with 5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599 not found: ID does not exist" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.973734 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} err="failed to get container status 
\"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": rpc error: code = NotFound desc = could not find container \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": container with ID starting with 5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.973834 4624 scope.go:117] "RemoveContainer" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.984622 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": container with ID starting with 596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a not found: ID does not exist" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.984852 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} err="failed to get container status \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": rpc error: code = NotFound desc = could not find container \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": container with ID starting with 596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.984949 4624 scope.go:117] "RemoveContainer" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.985515 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": container with ID starting with 5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba not found: ID does not exist" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.985599 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} err="failed to get container status \"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": rpc error: code = NotFound desc = could not find container \"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": container with ID starting with 5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.985693 4624 scope.go:117] "RemoveContainer" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" Feb 28 03:47:01 crc kubenswrapper[4624]: E0228 03:47:01.987471 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": container with ID starting with 7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53 not found: ID does not exist" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.987572 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} err="failed to get container status \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": rpc error: code = NotFound desc = could not find container \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": container with ID 
starting with 7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.987647 4624 scope.go:117] "RemoveContainer" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.987967 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} err="failed to get container status \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": rpc error: code = NotFound desc = could not find container \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": container with ID starting with f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.988037 4624 scope.go:117] "RemoveContainer" containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.988333 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} err="failed to get container status \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": rpc error: code = NotFound desc = could not find container \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": container with ID starting with ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.988422 4624 scope.go:117] "RemoveContainer" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.988737 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} err="failed to get container status \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": rpc error: code = NotFound desc = could not find container \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": container with ID starting with 57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.988819 4624 scope.go:117] "RemoveContainer" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.989109 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} err="failed to get container status \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": rpc error: code = NotFound desc = could not find container \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": container with ID starting with 3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.989179 4624 scope.go:117] "RemoveContainer" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.989727 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} err="failed to get container status \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": rpc error: code = NotFound desc = could not find container \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": container with ID starting with 1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c not found: ID does not 
exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.989816 4624 scope.go:117] "RemoveContainer" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.990189 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} err="failed to get container status \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": rpc error: code = NotFound desc = could not find container \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": container with ID starting with 5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.990269 4624 scope.go:117] "RemoveContainer" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.990660 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} err="failed to get container status \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": rpc error: code = NotFound desc = could not find container \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": container with ID starting with 596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.990751 4624 scope.go:117] "RemoveContainer" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.991472 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} err="failed to get container status 
\"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": rpc error: code = NotFound desc = could not find container \"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": container with ID starting with 5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.991561 4624 scope.go:117] "RemoveContainer" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.991901 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} err="failed to get container status \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": rpc error: code = NotFound desc = could not find container \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": container with ID starting with 7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.991980 4624 scope.go:117] "RemoveContainer" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.992303 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} err="failed to get container status \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": rpc error: code = NotFound desc = could not find container \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": container with ID starting with f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.992383 4624 scope.go:117] "RemoveContainer" 
containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.992708 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} err="failed to get container status \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": rpc error: code = NotFound desc = could not find container \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": container with ID starting with ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.992789 4624 scope.go:117] "RemoveContainer" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.993048 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} err="failed to get container status \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": rpc error: code = NotFound desc = could not find container \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": container with ID starting with 57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.993143 4624 scope.go:117] "RemoveContainer" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.993418 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} err="failed to get container status \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": rpc error: code = NotFound desc = could 
not find container \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": container with ID starting with 3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.993504 4624 scope.go:117] "RemoveContainer" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.993893 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} err="failed to get container status \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": rpc error: code = NotFound desc = could not find container \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": container with ID starting with 1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.993974 4624 scope.go:117] "RemoveContainer" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.994366 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} err="failed to get container status \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": rpc error: code = NotFound desc = could not find container \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": container with ID starting with 5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.994442 4624 scope.go:117] "RemoveContainer" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 
03:47:01.994864 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} err="failed to get container status \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": rpc error: code = NotFound desc = could not find container \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": container with ID starting with 596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.994960 4624 scope.go:117] "RemoveContainer" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.995375 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} err="failed to get container status \"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": rpc error: code = NotFound desc = could not find container \"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": container with ID starting with 5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.995457 4624 scope.go:117] "RemoveContainer" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.996589 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} err="failed to get container status \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": rpc error: code = NotFound desc = could not find container \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": container with ID starting with 
7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.996675 4624 scope.go:117] "RemoveContainer" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.997078 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} err="failed to get container status \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": rpc error: code = NotFound desc = could not find container \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": container with ID starting with f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.997233 4624 scope.go:117] "RemoveContainer" containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.997586 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} err="failed to get container status \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": rpc error: code = NotFound desc = could not find container \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": container with ID starting with ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.997653 4624 scope.go:117] "RemoveContainer" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.997962 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} err="failed to get container status \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": rpc error: code = NotFound desc = could not find container \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": container with ID starting with 57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.998036 4624 scope.go:117] "RemoveContainer" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.998381 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} err="failed to get container status \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": rpc error: code = NotFound desc = could not find container \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": container with ID starting with 3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.998495 4624 scope.go:117] "RemoveContainer" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.999149 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} err="failed to get container status \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": rpc error: code = NotFound desc = could not find container \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": container with ID starting with 1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c not found: ID does not 
exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.999247 4624 scope.go:117] "RemoveContainer" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.999839 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} err="failed to get container status \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": rpc error: code = NotFound desc = could not find container \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": container with ID starting with 5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599 not found: ID does not exist" Feb 28 03:47:01 crc kubenswrapper[4624]: I0228 03:47:01.999871 4624 scope.go:117] "RemoveContainer" containerID="596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.000260 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a"} err="failed to get container status \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": rpc error: code = NotFound desc = could not find container \"596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a\": container with ID starting with 596dc1beda8c5ccca88020c6c9d12b0720972e48f38feb0dd4da51661b063a4a not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.000377 4624 scope.go:117] "RemoveContainer" containerID="5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.000694 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba"} err="failed to get container status 
\"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": rpc error: code = NotFound desc = could not find container \"5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba\": container with ID starting with 5bd2e96029ae81173c6a23867e3b811ca364635cb19675adbc5c059e7baac0ba not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.000731 4624 scope.go:117] "RemoveContainer" containerID="7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.001104 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53"} err="failed to get container status \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": rpc error: code = NotFound desc = could not find container \"7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53\": container with ID starting with 7568e1e449422d53351487689fd7c13ffbcb06c5404a873ef332104474ee4b53 not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.001220 4624 scope.go:117] "RemoveContainer" containerID="f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.002331 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f"} err="failed to get container status \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": rpc error: code = NotFound desc = could not find container \"f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f\": container with ID starting with f65bb26e0f62e59e107e40fe9a14326d9236fe7bbc7da77aa191cfb93d6e1f6f not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.002389 4624 scope.go:117] "RemoveContainer" 
containerID="ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.002724 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089"} err="failed to get container status \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": rpc error: code = NotFound desc = could not find container \"ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089\": container with ID starting with ba378676b92ba3837647b0022b1c5bf771fd2bdaff87a7224df74b1f08d39089 not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.002757 4624 scope.go:117] "RemoveContainer" containerID="57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.003039 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e"} err="failed to get container status \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": rpc error: code = NotFound desc = could not find container \"57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e\": container with ID starting with 57dc32c9792289685a4a1dd86839be369ecea948bf578f3ffefb31022a80e59e not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.003063 4624 scope.go:117] "RemoveContainer" containerID="3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.003348 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0"} err="failed to get container status \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": rpc error: code = NotFound desc = could 
not find container \"3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0\": container with ID starting with 3ed43dd281becfb9f9f9a93e667a5186555acffaeea08ca4fd2c2973ca85d5a0 not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.003384 4624 scope.go:117] "RemoveContainer" containerID="1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.007243 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c"} err="failed to get container status \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": rpc error: code = NotFound desc = could not find container \"1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c\": container with ID starting with 1b5df6194c6b7697fe63cca84ea58c26680482ff0cb6a4307ab043825f82a92c not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.007273 4624 scope.go:117] "RemoveContainer" containerID="5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.007612 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599"} err="failed to get container status \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": rpc error: code = NotFound desc = could not find container \"5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599\": container with ID starting with 5183c29e908ad349d678e252fd48e6dff13069f42f5e8b0a2ec7bbf2fbc7f599 not found: ID does not exist" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.101574 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54aef42d-7730-464b-90c7-1d8bdf5e622c" 
path="/var/lib/kubelet/pods/54aef42d-7730-464b-90c7-1d8bdf5e622c/volumes" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.694327 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p5wwn_e8725d1d-2c0b-4f59-8489-f5f38f8e4d77/kube-multus/0.log" Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.695860 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p5wwn" event={"ID":"e8725d1d-2c0b-4f59-8489-f5f38f8e4d77","Type":"ContainerStarted","Data":"d458550de3c485239d0b454aa1a00658323407c3ae0c048a595523740eeef897"} Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.703419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"83190a658e86d2e9a0751584484909b03465445316e465d1a88f488fa09f8244"} Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.703488 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"7c3be52e0502537f42feaf2676647de462cf49c4658b6e72b88dad6f612df607"} Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.703505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"42fdbfbf980ac82a3610236a166e5b5bb201b549f91c3ab0351bc46cd7f31f95"} Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.703523 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"20107591b28cc61860ab01a092c9494d9db3405fdbdfd3e3b40e8414d9751336"} Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.703537 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"3532a7eb3e5023e40af1c7006cc244b561b8fb91b366ad58215f101f73a0051a"} Feb 28 03:47:02 crc kubenswrapper[4624]: I0228 03:47:02.703549 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"ab8fe2ac8676c8ca58a0953d21abd5b4f51d0b5b0ca94a3798aebc5150aee635"} Feb 28 03:47:05 crc kubenswrapper[4624]: I0228 03:47:05.734709 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"72311762fba127e042005619def0d122a8f0a4320c080b7a2f013f1824c6eeda"} Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.753020 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" event={"ID":"fb229076-6ce9-450d-8c7f-99d5666b416d","Type":"ContainerStarted","Data":"320501c5191aeb373fe4a2594f45de7f516a068a685fc0ad1985d2596444cd46"} Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.753568 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.753584 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.753593 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.813704 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" podStartSLOduration=6.813685728 podStartE2EDuration="6.813685728s" podCreationTimestamp="2026-02-28 
03:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:47:07.809612047 +0000 UTC m=+682.473651356" watchObservedRunningTime="2026-02-28 03:47:07.813685728 +0000 UTC m=+682.477725027" Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.829398 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:07 crc kubenswrapper[4624]: I0228 03:47:07.842446 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:31 crc kubenswrapper[4624]: I0228 03:47:31.414109 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jfg72" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.812748 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb"] Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.815006 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.818321 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.847738 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb"] Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.875119 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.875172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.875370 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2qxw\" (UniqueName: \"kubernetes.io/projected/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-kube-api-access-q2qxw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: 
I0228 03:47:39.976918 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.976998 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.977139 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2qxw\" (UniqueName: \"kubernetes.io/projected/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-kube-api-access-q2qxw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.977437 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:39 crc kubenswrapper[4624]: I0228 03:47:39.977900 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:40 crc kubenswrapper[4624]: I0228 03:47:40.004231 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2qxw\" (UniqueName: \"kubernetes.io/projected/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-kube-api-access-q2qxw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:40 crc kubenswrapper[4624]: I0228 03:47:40.146708 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:40 crc kubenswrapper[4624]: I0228 03:47:40.389230 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb"] Feb 28 03:47:40 crc kubenswrapper[4624]: I0228 03:47:40.993260 4624 generic.go:334] "Generic (PLEG): container finished" podID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerID="84aa6b21d6d1e41dc53eeeddd085b158128dc58eefc576913debee113d725c7c" exitCode=0 Feb 28 03:47:40 crc kubenswrapper[4624]: I0228 03:47:40.993322 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" event={"ID":"38b1807a-dd56-4dbe-9794-5f2de7b1b33f","Type":"ContainerDied","Data":"84aa6b21d6d1e41dc53eeeddd085b158128dc58eefc576913debee113d725c7c"} Feb 28 03:47:40 crc kubenswrapper[4624]: I0228 03:47:40.993732 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" event={"ID":"38b1807a-dd56-4dbe-9794-5f2de7b1b33f","Type":"ContainerStarted","Data":"5ee89fc5ffdf2ffeb9f17e783c962ac4330e7a3ac764deeefefb44d6a5d04c40"} Feb 28 03:47:43 crc kubenswrapper[4624]: I0228 03:47:43.009019 4624 generic.go:334] "Generic (PLEG): container finished" podID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerID="0e48fb36ef6cbfb45e09b506944bd66fc0e64fe7b84f63054aec485b4bc71a35" exitCode=0 Feb 28 03:47:43 crc kubenswrapper[4624]: I0228 03:47:43.009154 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" event={"ID":"38b1807a-dd56-4dbe-9794-5f2de7b1b33f","Type":"ContainerDied","Data":"0e48fb36ef6cbfb45e09b506944bd66fc0e64fe7b84f63054aec485b4bc71a35"} Feb 28 03:47:44 crc kubenswrapper[4624]: I0228 03:47:44.018377 4624 generic.go:334] "Generic (PLEG): container finished" podID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerID="e2e01d22c97cbaac67b6cc4c0ebb97dfdb04706aaa5e41c42c8e51a5bac19e46" exitCode=0 Feb 28 03:47:44 crc kubenswrapper[4624]: I0228 03:47:44.018719 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" event={"ID":"38b1807a-dd56-4dbe-9794-5f2de7b1b33f","Type":"ContainerDied","Data":"e2e01d22c97cbaac67b6cc4c0ebb97dfdb04706aaa5e41c42c8e51a5bac19e46"} Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.296783 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.373593 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-util\") pod \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.373697 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-bundle\") pod \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.373768 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2qxw\" (UniqueName: \"kubernetes.io/projected/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-kube-api-access-q2qxw\") pod \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\" (UID: \"38b1807a-dd56-4dbe-9794-5f2de7b1b33f\") " Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.375753 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-bundle" (OuterVolumeSpecName: "bundle") pod "38b1807a-dd56-4dbe-9794-5f2de7b1b33f" (UID: "38b1807a-dd56-4dbe-9794-5f2de7b1b33f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.383309 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-kube-api-access-q2qxw" (OuterVolumeSpecName: "kube-api-access-q2qxw") pod "38b1807a-dd56-4dbe-9794-5f2de7b1b33f" (UID: "38b1807a-dd56-4dbe-9794-5f2de7b1b33f"). InnerVolumeSpecName "kube-api-access-q2qxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.390850 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-util" (OuterVolumeSpecName: "util") pod "38b1807a-dd56-4dbe-9794-5f2de7b1b33f" (UID: "38b1807a-dd56-4dbe-9794-5f2de7b1b33f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.475713 4624 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.475753 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2qxw\" (UniqueName: \"kubernetes.io/projected/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-kube-api-access-q2qxw\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:45 crc kubenswrapper[4624]: I0228 03:47:45.475765 4624 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b1807a-dd56-4dbe-9794-5f2de7b1b33f-util\") on node \"crc\" DevicePath \"\"" Feb 28 03:47:46 crc kubenswrapper[4624]: I0228 03:47:46.033618 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" event={"ID":"38b1807a-dd56-4dbe-9794-5f2de7b1b33f","Type":"ContainerDied","Data":"5ee89fc5ffdf2ffeb9f17e783c962ac4330e7a3ac764deeefefb44d6a5d04c40"} Feb 28 03:47:46 crc kubenswrapper[4624]: I0228 03:47:46.033683 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee89fc5ffdf2ffeb9f17e783c962ac4330e7a3ac764deeefefb44d6a5d04c40" Feb 28 03:47:46 crc kubenswrapper[4624]: I0228 03:47:46.033782 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.569530 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks"] Feb 28 03:47:48 crc kubenswrapper[4624]: E0228 03:47:48.570230 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="pull" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.570252 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="pull" Feb 28 03:47:48 crc kubenswrapper[4624]: E0228 03:47:48.570275 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="extract" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.570288 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="extract" Feb 28 03:47:48 crc kubenswrapper[4624]: E0228 03:47:48.570306 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="util" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.570318 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="util" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.570497 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b1807a-dd56-4dbe-9794-5f2de7b1b33f" containerName="extract" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.571152 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.574392 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.574437 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dk5hs" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.574537 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.577608 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks"] Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.640497 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zz8g\" (UniqueName: \"kubernetes.io/projected/54bc88fe-b7dc-43a1-b64b-60723eb0cf7c-kube-api-access-6zz8g\") pod \"nmstate-operator-75c5dccd6c-wtfks\" (UID: \"54bc88fe-b7dc-43a1-b64b-60723eb0cf7c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.741600 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zz8g\" (UniqueName: \"kubernetes.io/projected/54bc88fe-b7dc-43a1-b64b-60723eb0cf7c-kube-api-access-6zz8g\") pod \"nmstate-operator-75c5dccd6c-wtfks\" (UID: \"54bc88fe-b7dc-43a1-b64b-60723eb0cf7c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.762238 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zz8g\" (UniqueName: \"kubernetes.io/projected/54bc88fe-b7dc-43a1-b64b-60723eb0cf7c-kube-api-access-6zz8g\") pod \"nmstate-operator-75c5dccd6c-wtfks\" (UID: 
\"54bc88fe-b7dc-43a1-b64b-60723eb0cf7c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" Feb 28 03:47:48 crc kubenswrapper[4624]: I0228 03:47:48.889725 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" Feb 28 03:47:49 crc kubenswrapper[4624]: I0228 03:47:49.123913 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks"] Feb 28 03:47:49 crc kubenswrapper[4624]: W0228 03:47:49.133446 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54bc88fe_b7dc_43a1_b64b_60723eb0cf7c.slice/crio-57f0002f6d9f53bf2d48854df988cbc5ae8e2642709e2125718183ec3ed4cde6 WatchSource:0}: Error finding container 57f0002f6d9f53bf2d48854df988cbc5ae8e2642709e2125718183ec3ed4cde6: Status 404 returned error can't find the container with id 57f0002f6d9f53bf2d48854df988cbc5ae8e2642709e2125718183ec3ed4cde6 Feb 28 03:47:50 crc kubenswrapper[4624]: I0228 03:47:50.079810 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" event={"ID":"54bc88fe-b7dc-43a1-b64b-60723eb0cf7c","Type":"ContainerStarted","Data":"57f0002f6d9f53bf2d48854df988cbc5ae8e2642709e2125718183ec3ed4cde6"} Feb 28 03:47:53 crc kubenswrapper[4624]: I0228 03:47:53.103325 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" event={"ID":"54bc88fe-b7dc-43a1-b64b-60723eb0cf7c","Type":"ContainerStarted","Data":"c118ee6aef2a25b62f66737aa685c49d0c32426fff41bc8e01b95b90e525234b"} Feb 28 03:47:53 crc kubenswrapper[4624]: I0228 03:47:53.128987 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-wtfks" podStartSLOduration=2.293986035 podStartE2EDuration="5.128968416s" podCreationTimestamp="2026-02-28 03:47:48 +0000 UTC" 
firstStartedPulling="2026-02-28 03:47:49.135156845 +0000 UTC m=+723.799196154" lastFinishedPulling="2026-02-28 03:47:51.970139216 +0000 UTC m=+726.634178535" observedRunningTime="2026-02-28 03:47:53.124457352 +0000 UTC m=+727.788496671" watchObservedRunningTime="2026-02-28 03:47:53.128968416 +0000 UTC m=+727.793007745" Feb 28 03:47:53 crc kubenswrapper[4624]: I0228 03:47:53.750596 4624 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.904310 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-pd9k9"] Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.906157 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.909808 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-67c5s" Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.913600 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh"] Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.958340 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-pd9k9"] Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.958470 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.961978 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.965021 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh"] Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.969159 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-75frj"] Feb 28 03:47:59 crc kubenswrapper[4624]: I0228 03:47:59.969993 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.060819 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-dbus-socket\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.060904 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-nmstate-lock\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.060941 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-ovs-socket\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc 
kubenswrapper[4624]: I0228 03:48:00.060974 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpkdv\" (UniqueName: \"kubernetes.io/projected/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-kube-api-access-mpkdv\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.061003 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2f662548-c391-4399-adba-8fa556360cf8-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.061034 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlnx\" (UniqueName: \"kubernetes.io/projected/2f662548-c391-4399-adba-8fa556360cf8-kube-api-access-prlnx\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.061061 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qlq\" (UniqueName: \"kubernetes.io/projected/66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a-kube-api-access-w9qlq\") pod \"nmstate-metrics-69594cc75-pd9k9\" (UID: \"66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.100389 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7"] Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.101186 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.103191 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.103427 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.103565 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cxx5k" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.119264 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7"] Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.151752 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537508-srmfl"] Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.153137 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.156148 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.156370 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.156492 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.158075 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537508-srmfl"] Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164668 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpkdv\" (UniqueName: \"kubernetes.io/projected/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-kube-api-access-mpkdv\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164707 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2f662548-c391-4399-adba-8fa556360cf8-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164750 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prlnx\" (UniqueName: \"kubernetes.io/projected/2f662548-c391-4399-adba-8fa556360cf8-kube-api-access-prlnx\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" 
Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164779 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9qlq\" (UniqueName: \"kubernetes.io/projected/66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a-kube-api-access-w9qlq\") pod \"nmstate-metrics-69594cc75-pd9k9\" (UID: \"66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164796 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-dbus-socket\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164839 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-nmstate-lock\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164880 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-ovs-socket\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.164941 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-ovs-socket\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.165269 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-dbus-socket\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.165305 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-nmstate-lock\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: E0228 03:48:00.166482 4624 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 28 03:48:00 crc kubenswrapper[4624]: E0228 03:48:00.166531 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f662548-c391-4399-adba-8fa556360cf8-tls-key-pair podName:2f662548-c391-4399-adba-8fa556360cf8 nodeName:}" failed. No retries permitted until 2026-02-28 03:48:00.666513479 +0000 UTC m=+735.330552788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/2f662548-c391-4399-adba-8fa556360cf8-tls-key-pair") pod "nmstate-webhook-786f45cff4-ctxdh" (UID: "2f662548-c391-4399-adba-8fa556360cf8") : secret "openshift-nmstate-webhook" not found Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.195105 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpkdv\" (UniqueName: \"kubernetes.io/projected/3abaedfc-0055-4d3d-a10c-0adf10cf8f52-kube-api-access-mpkdv\") pod \"nmstate-handler-75frj\" (UID: \"3abaedfc-0055-4d3d-a10c-0adf10cf8f52\") " pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.195343 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9qlq\" (UniqueName: \"kubernetes.io/projected/66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a-kube-api-access-w9qlq\") pod \"nmstate-metrics-69594cc75-pd9k9\" (UID: \"66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.197074 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prlnx\" (UniqueName: \"kubernetes.io/projected/2f662548-c391-4399-adba-8fa556360cf8-kube-api-access-prlnx\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.266235 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bft\" (UniqueName: \"kubernetes.io/projected/04995dc6-8837-4a1f-91df-bc058d0fb961-kube-api-access-q6bft\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 
03:48:00.266294 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04995dc6-8837-4a1f-91df-bc058d0fb961-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.266330 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t7xp\" (UniqueName: \"kubernetes.io/projected/bcfa6f52-194d-484f-8e89-4dc2bafc8a34-kube-api-access-6t7xp\") pod \"auto-csr-approver-29537508-srmfl\" (UID: \"bcfa6f52-194d-484f-8e89-4dc2bafc8a34\") " pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.266428 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04995dc6-8837-4a1f-91df-bc058d0fb961-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.274488 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7985c7579-7bt5l"] Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.275183 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.279923 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.294998 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7985c7579-7bt5l"] Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.307204 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.367676 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bft\" (UniqueName: \"kubernetes.io/projected/04995dc6-8837-4a1f-91df-bc058d0fb961-kube-api-access-q6bft\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368000 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04995dc6-8837-4a1f-91df-bc058d0fb961-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368023 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-service-ca\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368042 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-trusted-ca-bundle\") pod \"console-7985c7579-7bt5l\" (UID: 
\"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368061 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j584d\" (UniqueName: \"kubernetes.io/projected/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-kube-api-access-j584d\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368077 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-oauth-serving-cert\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368116 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t7xp\" (UniqueName: \"kubernetes.io/projected/bcfa6f52-194d-484f-8e89-4dc2bafc8a34-kube-api-access-6t7xp\") pod \"auto-csr-approver-29537508-srmfl\" (UID: \"bcfa6f52-194d-484f-8e89-4dc2bafc8a34\") " pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368137 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-serving-cert\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368160 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-config\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368186 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-oauth-config\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.368206 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04995dc6-8837-4a1f-91df-bc058d0fb961-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: E0228 03:48:00.368309 4624 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 28 03:48:00 crc kubenswrapper[4624]: E0228 03:48:00.368346 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04995dc6-8837-4a1f-91df-bc058d0fb961-plugin-serving-cert podName:04995dc6-8837-4a1f-91df-bc058d0fb961 nodeName:}" failed. No retries permitted until 2026-02-28 03:48:00.86833218 +0000 UTC m=+735.532371479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/04995dc6-8837-4a1f-91df-bc058d0fb961-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-mnlv7" (UID: "04995dc6-8837-4a1f-91df-bc058d0fb961") : secret "plugin-serving-cert" not found Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.369395 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04995dc6-8837-4a1f-91df-bc058d0fb961-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.390025 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bft\" (UniqueName: \"kubernetes.io/projected/04995dc6-8837-4a1f-91df-bc058d0fb961-kube-api-access-q6bft\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.397184 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t7xp\" (UniqueName: \"kubernetes.io/projected/bcfa6f52-194d-484f-8e89-4dc2bafc8a34-kube-api-access-6t7xp\") pod \"auto-csr-approver-29537508-srmfl\" (UID: \"bcfa6f52-194d-484f-8e89-4dc2bafc8a34\") " pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469111 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-service-ca\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469152 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j584d\" (UniqueName: \"kubernetes.io/projected/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-kube-api-access-j584d\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469173 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-trusted-ca-bundle\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469189 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-oauth-serving-cert\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469214 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-serving-cert\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469243 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-config\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.469271 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-oauth-config\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.470649 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-service-ca\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.472797 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-oauth-serving-cert\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.473267 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.483934 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-trusted-ca-bundle\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.489806 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-oauth-config\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.490189 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-config\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.493430 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-console-serving-cert\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.496875 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j584d\" (UniqueName: \"kubernetes.io/projected/6b184da0-ae29-4c08-82bb-1a4296ddcbf7-kube-api-access-j584d\") pod \"console-7985c7579-7bt5l\" (UID: \"6b184da0-ae29-4c08-82bb-1a4296ddcbf7\") " 
pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.592257 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.672785 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2f662548-c391-4399-adba-8fa556360cf8-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.676641 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2f662548-c391-4399-adba-8fa556360cf8-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-ctxdh\" (UID: \"2f662548-c391-4399-adba-8fa556360cf8\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.711469 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537508-srmfl"] Feb 28 03:48:00 crc kubenswrapper[4624]: W0228 03:48:00.731747 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcfa6f52_194d_484f_8e89_4dc2bafc8a34.slice/crio-ce7910163dace3ff026cfdccc1b958b008aa629e83f70de0e76e0facb65b8a57 WatchSource:0}: Error finding container ce7910163dace3ff026cfdccc1b958b008aa629e83f70de0e76e0facb65b8a57: Status 404 returned error can't find the container with id ce7910163dace3ff026cfdccc1b958b008aa629e83f70de0e76e0facb65b8a57 Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.771243 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-pd9k9"] Feb 28 03:48:00 crc kubenswrapper[4624]: W0228 03:48:00.786414 4624 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66ef0815_1c21_4b36_8e9e_b18d0fcc8d4a.slice/crio-a36cc4d3e55c16c9635065c8232804a74380354ea80cc34298cb44cea01be7a1 WatchSource:0}: Error finding container a36cc4d3e55c16c9635065c8232804a74380354ea80cc34298cb44cea01be7a1: Status 404 returned error can't find the container with id a36cc4d3e55c16c9635065c8232804a74380354ea80cc34298cb44cea01be7a1 Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.841629 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7985c7579-7bt5l"] Feb 28 03:48:00 crc kubenswrapper[4624]: W0228 03:48:00.848667 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b184da0_ae29_4c08_82bb_1a4296ddcbf7.slice/crio-a80cd988cacad170d0327c9d50b70700565a3645d3ce88a8f492c5c13115d296 WatchSource:0}: Error finding container a80cd988cacad170d0327c9d50b70700565a3645d3ce88a8f492c5c13115d296: Status 404 returned error can't find the container with id a80cd988cacad170d0327c9d50b70700565a3645d3ce88a8f492c5c13115d296 Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.875764 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04995dc6-8837-4a1f-91df-bc058d0fb961-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.879644 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04995dc6-8837-4a1f-91df-bc058d0fb961-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-mnlv7\" (UID: \"04995dc6-8837-4a1f-91df-bc058d0fb961\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:00 crc kubenswrapper[4624]: I0228 03:48:00.897743 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.022635 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.107462 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh"] Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.195299 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-75frj" event={"ID":"3abaedfc-0055-4d3d-a10c-0adf10cf8f52","Type":"ContainerStarted","Data":"9c3c4a74d575a3635dbd4c935c036c87eda2d6a0af1481bbd5dd37e9ba123426"} Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.196619 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" event={"ID":"66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a","Type":"ContainerStarted","Data":"a36cc4d3e55c16c9635065c8232804a74380354ea80cc34298cb44cea01be7a1"} Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.198684 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7985c7579-7bt5l" event={"ID":"6b184da0-ae29-4c08-82bb-1a4296ddcbf7","Type":"ContainerStarted","Data":"a9d1634a1afd0313fa49c07677d8026168ff80f07bd05b899eb4374f628aa0d4"} Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.198702 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7985c7579-7bt5l" event={"ID":"6b184da0-ae29-4c08-82bb-1a4296ddcbf7","Type":"ContainerStarted","Data":"a80cd988cacad170d0327c9d50b70700565a3645d3ce88a8f492c5c13115d296"} Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.200910 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537508-srmfl" event={"ID":"bcfa6f52-194d-484f-8e89-4dc2bafc8a34","Type":"ContainerStarted","Data":"ce7910163dace3ff026cfdccc1b958b008aa629e83f70de0e76e0facb65b8a57"} Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.201963 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" event={"ID":"2f662548-c391-4399-adba-8fa556360cf8","Type":"ContainerStarted","Data":"dd83a1077b39a362c19d78ed64e5ee8f821ea48ab038d190ab3402dfc6fcf772"} Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.215423 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7985c7579-7bt5l" podStartSLOduration=1.215406614 podStartE2EDuration="1.215406614s" podCreationTimestamp="2026-02-28 03:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:48:01.211825446 +0000 UTC m=+735.875864755" watchObservedRunningTime="2026-02-28 03:48:01.215406614 +0000 UTC m=+735.879445923" Feb 28 03:48:01 crc kubenswrapper[4624]: I0228 03:48:01.233093 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7"] Feb 28 03:48:01 crc kubenswrapper[4624]: W0228 03:48:01.240332 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04995dc6_8837_4a1f_91df_bc058d0fb961.slice/crio-adf4e7fd6c27c632ac362a8b3b50e17a0da95bb3ef6ee3d38d80fdfcb748f443 WatchSource:0}: Error finding container adf4e7fd6c27c632ac362a8b3b50e17a0da95bb3ef6ee3d38d80fdfcb748f443: Status 404 returned error can't find the container with id adf4e7fd6c27c632ac362a8b3b50e17a0da95bb3ef6ee3d38d80fdfcb748f443 Feb 28 03:48:02 crc kubenswrapper[4624]: I0228 03:48:02.211075 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537508-srmfl" event={"ID":"bcfa6f52-194d-484f-8e89-4dc2bafc8a34","Type":"ContainerStarted","Data":"b04e905b52161116f68e479dcebabfd6d116ea9718a06e87fdaf96e6907b6c63"} Feb 28 03:48:02 crc kubenswrapper[4624]: I0228 03:48:02.212264 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" event={"ID":"04995dc6-8837-4a1f-91df-bc058d0fb961","Type":"ContainerStarted","Data":"adf4e7fd6c27c632ac362a8b3b50e17a0da95bb3ef6ee3d38d80fdfcb748f443"} Feb 28 03:48:02 crc kubenswrapper[4624]: I0228 03:48:02.228766 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537508-srmfl" podStartSLOduration=1.217647346 podStartE2EDuration="2.228744678s" podCreationTimestamp="2026-02-28 03:48:00 +0000 UTC" firstStartedPulling="2026-02-28 03:48:00.735009972 +0000 UTC m=+735.399049281" lastFinishedPulling="2026-02-28 03:48:01.746107274 +0000 UTC m=+736.410146613" observedRunningTime="2026-02-28 03:48:02.227426311 +0000 UTC m=+736.891465620" watchObservedRunningTime="2026-02-28 03:48:02.228744678 +0000 UTC m=+736.892783987" Feb 28 03:48:03 crc kubenswrapper[4624]: I0228 03:48:03.221570 4624 generic.go:334] "Generic (PLEG): container finished" podID="bcfa6f52-194d-484f-8e89-4dc2bafc8a34" containerID="b04e905b52161116f68e479dcebabfd6d116ea9718a06e87fdaf96e6907b6c63" exitCode=0 Feb 28 03:48:03 crc kubenswrapper[4624]: I0228 03:48:03.221664 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537508-srmfl" event={"ID":"bcfa6f52-194d-484f-8e89-4dc2bafc8a34","Type":"ContainerDied","Data":"b04e905b52161116f68e479dcebabfd6d116ea9718a06e87fdaf96e6907b6c63"} Feb 28 03:48:04 crc kubenswrapper[4624]: I0228 03:48:04.509457 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:04 crc kubenswrapper[4624]: I0228 03:48:04.631839 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t7xp\" (UniqueName: \"kubernetes.io/projected/bcfa6f52-194d-484f-8e89-4dc2bafc8a34-kube-api-access-6t7xp\") pod \"bcfa6f52-194d-484f-8e89-4dc2bafc8a34\" (UID: \"bcfa6f52-194d-484f-8e89-4dc2bafc8a34\") " Feb 28 03:48:04 crc kubenswrapper[4624]: I0228 03:48:04.639243 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcfa6f52-194d-484f-8e89-4dc2bafc8a34-kube-api-access-6t7xp" (OuterVolumeSpecName: "kube-api-access-6t7xp") pod "bcfa6f52-194d-484f-8e89-4dc2bafc8a34" (UID: "bcfa6f52-194d-484f-8e89-4dc2bafc8a34"). InnerVolumeSpecName "kube-api-access-6t7xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:48:04 crc kubenswrapper[4624]: I0228 03:48:04.734032 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t7xp\" (UniqueName: \"kubernetes.io/projected/bcfa6f52-194d-484f-8e89-4dc2bafc8a34-kube-api-access-6t7xp\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.273312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-75frj" event={"ID":"3abaedfc-0055-4d3d-a10c-0adf10cf8f52","Type":"ContainerStarted","Data":"d0a009d31e71a517c27a8fa47f3b21494a997587b230652d53fa8eb8b35f0927"} Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.274593 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.281388 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" 
event={"ID":"66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a","Type":"ContainerStarted","Data":"1b8c1b823896d58e85352abf5ba5266322fbe16e08bcca8806f4beeef0181fa2"} Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.293017 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537508-srmfl" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.298876 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537508-srmfl" event={"ID":"bcfa6f52-194d-484f-8e89-4dc2bafc8a34","Type":"ContainerDied","Data":"ce7910163dace3ff026cfdccc1b958b008aa629e83f70de0e76e0facb65b8a57"} Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.298970 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce7910163dace3ff026cfdccc1b958b008aa629e83f70de0e76e0facb65b8a57" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.299960 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-75frj" podStartSLOduration=2.1970610170000002 podStartE2EDuration="6.299934218s" podCreationTimestamp="2026-02-28 03:47:59 +0000 UTC" firstStartedPulling="2026-02-28 03:48:00.3440477 +0000 UTC m=+735.008087009" lastFinishedPulling="2026-02-28 03:48:04.446920901 +0000 UTC m=+739.110960210" observedRunningTime="2026-02-28 03:48:05.298076117 +0000 UTC m=+739.962115436" watchObservedRunningTime="2026-02-28 03:48:05.299934218 +0000 UTC m=+739.963973537" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.306557 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" event={"ID":"04995dc6-8837-4a1f-91df-bc058d0fb961","Type":"ContainerStarted","Data":"e114aec6558bb3986d7c2f5b19ae77b1ae34b1d0c318f3f23ecea84c2391476b"} Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.310219 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" event={"ID":"2f662548-c391-4399-adba-8fa556360cf8","Type":"ContainerStarted","Data":"b005b0562d149bed4daa341b26b980487ee2fa45de6eb4d9f8d44ccd94e9b3c7"} Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.310395 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.319041 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537502-bptlf"] Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.327319 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537502-bptlf"] Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.331014 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-mnlv7" podStartSLOduration=2.134574208 podStartE2EDuration="5.330983646s" podCreationTimestamp="2026-02-28 03:48:00 +0000 UTC" firstStartedPulling="2026-02-28 03:48:01.242862522 +0000 UTC m=+735.906901831" lastFinishedPulling="2026-02-28 03:48:04.43927195 +0000 UTC m=+739.103311269" observedRunningTime="2026-02-28 03:48:05.320041204 +0000 UTC m=+739.984080523" watchObservedRunningTime="2026-02-28 03:48:05.330983646 +0000 UTC m=+739.995022955" Feb 28 03:48:05 crc kubenswrapper[4624]: I0228 03:48:05.350793 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" podStartSLOduration=3.040217733 podStartE2EDuration="6.350773382s" podCreationTimestamp="2026-02-28 03:47:59 +0000 UTC" firstStartedPulling="2026-02-28 03:48:01.119400364 +0000 UTC m=+735.783439673" lastFinishedPulling="2026-02-28 03:48:04.429956003 +0000 UTC m=+739.093995322" observedRunningTime="2026-02-28 03:48:05.348837838 +0000 UTC m=+740.012877147" watchObservedRunningTime="2026-02-28 03:48:05.350773382 +0000 
UTC m=+740.014812701" Feb 28 03:48:06 crc kubenswrapper[4624]: I0228 03:48:06.097830 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9914aecc-5e57-40f7-886d-a4290bef8682" path="/var/lib/kubelet/pods/9914aecc-5e57-40f7-886d-a4290bef8682/volumes" Feb 28 03:48:08 crc kubenswrapper[4624]: I0228 03:48:08.341563 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" event={"ID":"66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a","Type":"ContainerStarted","Data":"9001ff1382427967dd885a5bed273086e6a14171299ee384c34e877cca2e3ab8"} Feb 28 03:48:10 crc kubenswrapper[4624]: I0228 03:48:10.387930 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-75frj" Feb 28 03:48:10 crc kubenswrapper[4624]: I0228 03:48:10.413364 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-pd9k9" podStartSLOduration=5.010085591 podStartE2EDuration="11.413336095s" podCreationTimestamp="2026-02-28 03:47:59 +0000 UTC" firstStartedPulling="2026-02-28 03:48:00.788960142 +0000 UTC m=+735.452999451" lastFinishedPulling="2026-02-28 03:48:07.192210646 +0000 UTC m=+741.856249955" observedRunningTime="2026-02-28 03:48:08.371307155 +0000 UTC m=+743.035346504" watchObservedRunningTime="2026-02-28 03:48:10.413336095 +0000 UTC m=+745.077375454" Feb 28 03:48:10 crc kubenswrapper[4624]: I0228 03:48:10.593425 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:10 crc kubenswrapper[4624]: I0228 03:48:10.594039 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:10 crc kubenswrapper[4624]: I0228 03:48:10.600719 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:11 crc 
kubenswrapper[4624]: I0228 03:48:11.404221 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7985c7579-7bt5l" Feb 28 03:48:11 crc kubenswrapper[4624]: I0228 03:48:11.483206 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ssl5n"] Feb 28 03:48:19 crc kubenswrapper[4624]: I0228 03:48:19.540397 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:48:19 crc kubenswrapper[4624]: I0228 03:48:19.540890 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:48:20 crc kubenswrapper[4624]: I0228 03:48:20.908039 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-ctxdh" Feb 28 03:48:34 crc kubenswrapper[4624]: I0228 03:48:34.977450 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n"] Feb 28 03:48:34 crc kubenswrapper[4624]: E0228 03:48:34.978654 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfa6f52-194d-484f-8e89-4dc2bafc8a34" containerName="oc" Feb 28 03:48:34 crc kubenswrapper[4624]: I0228 03:48:34.978675 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfa6f52-194d-484f-8e89-4dc2bafc8a34" containerName="oc" Feb 28 03:48:34 crc kubenswrapper[4624]: I0228 03:48:34.978819 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bcfa6f52-194d-484f-8e89-4dc2bafc8a34" containerName="oc" Feb 28 03:48:34 crc kubenswrapper[4624]: I0228 03:48:34.979893 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:34 crc kubenswrapper[4624]: I0228 03:48:34.983282 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 03:48:34 crc kubenswrapper[4624]: I0228 03:48:34.992198 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n"] Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.151125 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.154299 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqshp\" (UniqueName: \"kubernetes.io/projected/6c955210-207a-4dc2-9be3-52ea5702de08-kube-api-access-cqshp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.154675 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: 
\"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.256181 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqshp\" (UniqueName: \"kubernetes.io/projected/6c955210-207a-4dc2-9be3-52ea5702de08-kube-api-access-cqshp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.256424 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.256517 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.257645 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc 
kubenswrapper[4624]: I0228 03:48:35.258359 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.297960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqshp\" (UniqueName: \"kubernetes.io/projected/6c955210-207a-4dc2-9be3-52ea5702de08-kube-api-access-cqshp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.364016 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:35 crc kubenswrapper[4624]: I0228 03:48:35.836484 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n"] Feb 28 03:48:36 crc kubenswrapper[4624]: I0228 03:48:36.589382 4624 generic.go:334] "Generic (PLEG): container finished" podID="6c955210-207a-4dc2-9be3-52ea5702de08" containerID="16862412818b84ce41bf7814f51a5d4898724a6c3a737cbd52af52e8b10a5544" exitCode=0 Feb 28 03:48:36 crc kubenswrapper[4624]: I0228 03:48:36.589541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" event={"ID":"6c955210-207a-4dc2-9be3-52ea5702de08","Type":"ContainerDied","Data":"16862412818b84ce41bf7814f51a5d4898724a6c3a737cbd52af52e8b10a5544"} Feb 28 03:48:36 crc kubenswrapper[4624]: I0228 03:48:36.589958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" event={"ID":"6c955210-207a-4dc2-9be3-52ea5702de08","Type":"ContainerStarted","Data":"620d1463ef4dccfd7629c2766e16cbb8c0f2ce0af074c28407ecb123709fa03a"} Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.321497 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rwkf2"] Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.324343 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.334156 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwkf2"] Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.388307 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ssl5n" podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerName="console" containerID="cri-o://294076c871f077b10e77d68db58107450ad355235fb4841b712eb316390f12a0" gracePeriod=15 Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.485566 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtgv\" (UniqueName: \"kubernetes.io/projected/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-kube-api-access-zqtgv\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.485894 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-utilities\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.485937 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-catalog-content\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: E0228 03:48:37.530881 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63191dc2_3a46_435d_9e6d_158fe21737e1.slice/crio-294076c871f077b10e77d68db58107450ad355235fb4841b712eb316390f12a0.scope\": RecentStats: unable to find data in memory cache]" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.587680 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtgv\" (UniqueName: \"kubernetes.io/projected/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-kube-api-access-zqtgv\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.587739 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-utilities\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.587778 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-catalog-content\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.588357 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-catalog-content\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.588598 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-utilities\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.597532 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ssl5n_63191dc2-3a46-435d-9e6d-158fe21737e1/console/0.log" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.597575 4624 generic.go:334] "Generic (PLEG): container finished" podID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerID="294076c871f077b10e77d68db58107450ad355235fb4841b712eb316390f12a0" exitCode=2 Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.597613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ssl5n" event={"ID":"63191dc2-3a46-435d-9e6d-158fe21737e1","Type":"ContainerDied","Data":"294076c871f077b10e77d68db58107450ad355235fb4841b712eb316390f12a0"} Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.615356 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtgv\" (UniqueName: \"kubernetes.io/projected/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-kube-api-access-zqtgv\") pod \"redhat-operators-rwkf2\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:37 crc kubenswrapper[4624]: I0228 03:48:37.710929 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.017891 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwkf2"] Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.030380 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ssl5n_63191dc2-3a46-435d-9e6d-158fe21737e1/console/0.log" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.030477 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:48:38 crc kubenswrapper[4624]: W0228 03:48:38.031338 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eed19ed_ecb0_443f_9bd0_86e228d57dc5.slice/crio-c01171acafb265b446801fbec18856a3e79f6371fc05978fc04ed547e253cac6 WatchSource:0}: Error finding container c01171acafb265b446801fbec18856a3e79f6371fc05978fc04ed547e253cac6: Status 404 returned error can't find the container with id c01171acafb265b446801fbec18856a3e79f6371fc05978fc04ed547e253cac6 Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.205653 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-serving-cert\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.205973 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-console-config\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206133 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-trusted-ca-bundle\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206258 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9szkf\" (UniqueName: \"kubernetes.io/projected/63191dc2-3a46-435d-9e6d-158fe21737e1-kube-api-access-9szkf\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206334 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-oauth-config\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206439 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-oauth-serving-cert\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206537 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-service-ca\") pod \"63191dc2-3a46-435d-9e6d-158fe21737e1\" (UID: \"63191dc2-3a46-435d-9e6d-158fe21737e1\") " Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206670 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-console-config" (OuterVolumeSpecName: "console-config") pod 
"63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206678 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.206922 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.207245 4624 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.207312 4624 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.207459 4624 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.207237 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-service-ca" (OuterVolumeSpecName: "service-ca") pod "63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.214212 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.214674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.216057 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63191dc2-3a46-435d-9e6d-158fe21737e1-kube-api-access-9szkf" (OuterVolumeSpecName: "kube-api-access-9szkf") pod "63191dc2-3a46-435d-9e6d-158fe21737e1" (UID: "63191dc2-3a46-435d-9e6d-158fe21737e1"). InnerVolumeSpecName "kube-api-access-9szkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.309562 4624 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.309595 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9szkf\" (UniqueName: \"kubernetes.io/projected/63191dc2-3a46-435d-9e6d-158fe21737e1-kube-api-access-9szkf\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.309619 4624 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/63191dc2-3a46-435d-9e6d-158fe21737e1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.309628 4624 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/63191dc2-3a46-435d-9e6d-158fe21737e1-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.604375 4624 generic.go:334] "Generic (PLEG): container finished" podID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerID="1c48bec9b19b6f8837894a3d1113a596d758d3e570e48a884ae9b722826ddc0a" exitCode=0 Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.604433 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerDied","Data":"1c48bec9b19b6f8837894a3d1113a596d758d3e570e48a884ae9b722826ddc0a"} Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.604734 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerStarted","Data":"c01171acafb265b446801fbec18856a3e79f6371fc05978fc04ed547e253cac6"} Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.606891 4624 generic.go:334] "Generic (PLEG): container finished" podID="6c955210-207a-4dc2-9be3-52ea5702de08" containerID="3ccfc2fe44a6a5041db04e138c69e608a09003c222f1eec899e1494d8856f8b1" exitCode=0 Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.607027 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" event={"ID":"6c955210-207a-4dc2-9be3-52ea5702de08","Type":"ContainerDied","Data":"3ccfc2fe44a6a5041db04e138c69e608a09003c222f1eec899e1494d8856f8b1"} Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.610521 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ssl5n_63191dc2-3a46-435d-9e6d-158fe21737e1/console/0.log" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.610569 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-ssl5n" event={"ID":"63191dc2-3a46-435d-9e6d-158fe21737e1","Type":"ContainerDied","Data":"3dc5873b3866e8973931e9419e6a3457e23b95d1931694c82efc70e9efa93d89"} Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.610603 4624 scope.go:117] "RemoveContainer" containerID="294076c871f077b10e77d68db58107450ad355235fb4841b712eb316390f12a0" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.610739 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ssl5n" Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.671914 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ssl5n"] Feb 28 03:48:38 crc kubenswrapper[4624]: I0228 03:48:38.677728 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ssl5n"] Feb 28 03:48:39 crc kubenswrapper[4624]: I0228 03:48:39.618025 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerStarted","Data":"2fa106c1c66737fdee29dad5eaa99442491963619549295f3ae52aa8f4c55ad9"} Feb 28 03:48:39 crc kubenswrapper[4624]: I0228 03:48:39.622023 4624 generic.go:334] "Generic (PLEG): container finished" podID="6c955210-207a-4dc2-9be3-52ea5702de08" containerID="f216ad3194e7084623ec3d4552361988048158a02a4848be4addd243c616eb6d" exitCode=0 Feb 28 03:48:39 crc kubenswrapper[4624]: I0228 03:48:39.622102 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" event={"ID":"6c955210-207a-4dc2-9be3-52ea5702de08","Type":"ContainerDied","Data":"f216ad3194e7084623ec3d4552361988048158a02a4848be4addd243c616eb6d"} Feb 28 03:48:40 crc kubenswrapper[4624]: I0228 03:48:40.115697 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" path="/var/lib/kubelet/pods/63191dc2-3a46-435d-9e6d-158fe21737e1/volumes" Feb 28 03:48:40 crc kubenswrapper[4624]: I0228 03:48:40.647210 4624 generic.go:334] "Generic (PLEG): container finished" podID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerID="2fa106c1c66737fdee29dad5eaa99442491963619549295f3ae52aa8f4c55ad9" exitCode=0 Feb 28 03:48:40 crc kubenswrapper[4624]: I0228 03:48:40.647411 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerDied","Data":"2fa106c1c66737fdee29dad5eaa99442491963619549295f3ae52aa8f4c55ad9"} Feb 28 03:48:40 crc kubenswrapper[4624]: I0228 03:48:40.958483 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.058037 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-util\") pod \"6c955210-207a-4dc2-9be3-52ea5702de08\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.058153 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqshp\" (UniqueName: \"kubernetes.io/projected/6c955210-207a-4dc2-9be3-52ea5702de08-kube-api-access-cqshp\") pod \"6c955210-207a-4dc2-9be3-52ea5702de08\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.058232 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-bundle\") pod \"6c955210-207a-4dc2-9be3-52ea5702de08\" (UID: \"6c955210-207a-4dc2-9be3-52ea5702de08\") " Feb 28 03:48:41 crc 
kubenswrapper[4624]: I0228 03:48:41.059519 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-bundle" (OuterVolumeSpecName: "bundle") pod "6c955210-207a-4dc2-9be3-52ea5702de08" (UID: "6c955210-207a-4dc2-9be3-52ea5702de08"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.068500 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c955210-207a-4dc2-9be3-52ea5702de08-kube-api-access-cqshp" (OuterVolumeSpecName: "kube-api-access-cqshp") pod "6c955210-207a-4dc2-9be3-52ea5702de08" (UID: "6c955210-207a-4dc2-9be3-52ea5702de08"). InnerVolumeSpecName "kube-api-access-cqshp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.072041 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-util" (OuterVolumeSpecName: "util") pod "6c955210-207a-4dc2-9be3-52ea5702de08" (UID: "6c955210-207a-4dc2-9be3-52ea5702de08"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.160177 4624 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-util\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.160220 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqshp\" (UniqueName: \"kubernetes.io/projected/6c955210-207a-4dc2-9be3-52ea5702de08-kube-api-access-cqshp\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.160237 4624 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c955210-207a-4dc2-9be3-52ea5702de08-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.655498 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" event={"ID":"6c955210-207a-4dc2-9be3-52ea5702de08","Type":"ContainerDied","Data":"620d1463ef4dccfd7629c2766e16cbb8c0f2ce0af074c28407ecb123709fa03a"} Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.655542 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620d1463ef4dccfd7629c2766e16cbb8c0f2ce0af074c28407ecb123709fa03a" Feb 28 03:48:41 crc kubenswrapper[4624]: I0228 03:48:41.655584 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n" Feb 28 03:48:42 crc kubenswrapper[4624]: I0228 03:48:42.668365 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerStarted","Data":"666fb09b63531d274926416609d5db07304e9edacaa3ddf874f656ee19cd1cb9"} Feb 28 03:48:42 crc kubenswrapper[4624]: I0228 03:48:42.694370 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rwkf2" podStartSLOduration=2.6647333250000003 podStartE2EDuration="5.694350882s" podCreationTimestamp="2026-02-28 03:48:37 +0000 UTC" firstStartedPulling="2026-02-28 03:48:38.607684178 +0000 UTC m=+773.271723497" lastFinishedPulling="2026-02-28 03:48:41.637301745 +0000 UTC m=+776.301341054" observedRunningTime="2026-02-28 03:48:42.693278302 +0000 UTC m=+777.357317631" watchObservedRunningTime="2026-02-28 03:48:42.694350882 +0000 UTC m=+777.358390191" Feb 28 03:48:46 crc kubenswrapper[4624]: I0228 03:48:46.895824 4624 scope.go:117] "RemoveContainer" containerID="bc01818307416a9da39d01c76b0942060b043b316459d6b312c4083abe8ea234" Feb 28 03:48:47 crc kubenswrapper[4624]: I0228 03:48:47.711205 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:47 crc kubenswrapper[4624]: I0228 03:48:47.711255 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:48 crc kubenswrapper[4624]: I0228 03:48:48.769745 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rwkf2" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="registry-server" probeResult="failure" output=< Feb 28 03:48:48 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s 
Feb 28 03:48:48 crc kubenswrapper[4624]: > Feb 28 03:48:49 crc kubenswrapper[4624]: I0228 03:48:49.540381 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:48:49 crc kubenswrapper[4624]: I0228 03:48:49.540887 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318122 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl"] Feb 28 03:48:51 crc kubenswrapper[4624]: E0228 03:48:51.318355 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerName="console" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318367 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerName="console" Feb 28 03:48:51 crc kubenswrapper[4624]: E0228 03:48:51.318383 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="pull" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318389 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="pull" Feb 28 03:48:51 crc kubenswrapper[4624]: E0228 03:48:51.318404 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="util" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318410 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="util" Feb 28 03:48:51 crc kubenswrapper[4624]: E0228 03:48:51.318421 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="extract" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318426 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="extract" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318512 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="63191dc2-3a46-435d-9e6d-158fe21737e1" containerName="console" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318526 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c955210-207a-4dc2-9be3-52ea5702de08" containerName="extract" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.318953 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.321204 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-l7mn6" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.324144 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.324159 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.324361 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.324381 4624 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.339481 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl"] Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.415135 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-webhook-cert\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.415511 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ksm\" (UniqueName: \"kubernetes.io/projected/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-kube-api-access-t7ksm\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.415543 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-apiservice-cert\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.516480 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-webhook-cert\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: 
\"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.516563 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7ksm\" (UniqueName: \"kubernetes.io/projected/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-kube-api-access-t7ksm\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.516594 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-apiservice-cert\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.522750 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-webhook-cert\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.522898 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-apiservice-cert\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.541348 4624 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-t7ksm\" (UniqueName: \"kubernetes.io/projected/1ad85c59-61bb-4658-8e2a-cdd409e54b3d-kube-api-access-t7ksm\") pod \"metallb-operator-controller-manager-6647b8f4b6-mkvwl\" (UID: \"1ad85c59-61bb-4658-8e2a-cdd409e54b3d\") " pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.634285 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.894837 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl"] Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.943456 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr"] Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.944184 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.946865 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.947183 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.948360 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8m5gv" Feb 28 03:48:51 crc kubenswrapper[4624]: I0228 03:48:51.959876 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr"] Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.032176 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7be65953-83ce-403e-aac6-443ced5b772b-apiservice-cert\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.032278 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7be65953-83ce-403e-aac6-443ced5b772b-webhook-cert\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.032347 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvzjz\" (UniqueName: 
\"kubernetes.io/projected/7be65953-83ce-403e-aac6-443ced5b772b-kube-api-access-wvzjz\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.133487 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvzjz\" (UniqueName: \"kubernetes.io/projected/7be65953-83ce-403e-aac6-443ced5b772b-kube-api-access-wvzjz\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.133549 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7be65953-83ce-403e-aac6-443ced5b772b-apiservice-cert\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.133596 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7be65953-83ce-403e-aac6-443ced5b772b-webhook-cert\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.138523 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7be65953-83ce-403e-aac6-443ced5b772b-apiservice-cert\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" 
Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.140751 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7be65953-83ce-403e-aac6-443ced5b772b-webhook-cert\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.162381 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvzjz\" (UniqueName: \"kubernetes.io/projected/7be65953-83ce-403e-aac6-443ced5b772b-kube-api-access-wvzjz\") pod \"metallb-operator-webhook-server-694dbf9577-jnbcr\" (UID: \"7be65953-83ce-403e-aac6-443ced5b772b\") " pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.273568 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" event={"ID":"1ad85c59-61bb-4658-8e2a-cdd409e54b3d","Type":"ContainerStarted","Data":"9eedecd1f9a3e8406e75c394e0b9a30a42c321da4a5469d6c378ba2ea0fc10c9"} Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.300468 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:52 crc kubenswrapper[4624]: I0228 03:48:52.604278 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr"] Feb 28 03:48:52 crc kubenswrapper[4624]: W0228 03:48:52.614435 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be65953_83ce_403e_aac6_443ced5b772b.slice/crio-7dd52a9328e03d4298c71b113f49641de1505e14320117e416ab64900219499e WatchSource:0}: Error finding container 7dd52a9328e03d4298c71b113f49641de1505e14320117e416ab64900219499e: Status 404 returned error can't find the container with id 7dd52a9328e03d4298c71b113f49641de1505e14320117e416ab64900219499e Feb 28 03:48:53 crc kubenswrapper[4624]: I0228 03:48:53.283609 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" event={"ID":"7be65953-83ce-403e-aac6-443ced5b772b","Type":"ContainerStarted","Data":"7dd52a9328e03d4298c71b113f49641de1505e14320117e416ab64900219499e"} Feb 28 03:48:56 crc kubenswrapper[4624]: I0228 03:48:56.315209 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" event={"ID":"1ad85c59-61bb-4658-8e2a-cdd409e54b3d","Type":"ContainerStarted","Data":"e84cd9427b2f20fb348217c30d5fbfb29b29bde87f20ff46f5d5461ea4b2a4a9"} Feb 28 03:48:56 crc kubenswrapper[4624]: I0228 03:48:56.315814 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:48:57 crc kubenswrapper[4624]: I0228 03:48:57.773536 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:57 crc kubenswrapper[4624]: I0228 03:48:57.805102 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" podStartSLOduration=2.965805641 podStartE2EDuration="6.80505735s" podCreationTimestamp="2026-02-28 03:48:51 +0000 UTC" firstStartedPulling="2026-02-28 03:48:51.924590377 +0000 UTC m=+786.588629676" lastFinishedPulling="2026-02-28 03:48:55.763842076 +0000 UTC m=+790.427881385" observedRunningTime="2026-02-28 03:48:56.342822304 +0000 UTC m=+791.006861633" watchObservedRunningTime="2026-02-28 03:48:57.80505735 +0000 UTC m=+792.469096669" Feb 28 03:48:57 crc kubenswrapper[4624]: I0228 03:48:57.831258 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:48:59 crc kubenswrapper[4624]: I0228 03:48:59.346483 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" event={"ID":"7be65953-83ce-403e-aac6-443ced5b772b","Type":"ContainerStarted","Data":"6203d3212c44c3e2b48eb473ed57331db9ca1562e0265f97bca136824d547a30"} Feb 28 03:48:59 crc kubenswrapper[4624]: I0228 03:48:59.346927 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:48:59 crc kubenswrapper[4624]: I0228 03:48:59.383886 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" podStartSLOduration=2.007884631 podStartE2EDuration="8.383852789s" podCreationTimestamp="2026-02-28 03:48:51 +0000 UTC" firstStartedPulling="2026-02-28 03:48:52.618062691 +0000 UTC m=+787.282102000" lastFinishedPulling="2026-02-28 03:48:58.994030849 +0000 UTC m=+793.658070158" observedRunningTime="2026-02-28 03:48:59.382669136 +0000 UTC m=+794.046708445" watchObservedRunningTime="2026-02-28 03:48:59.383852789 +0000 UTC m=+794.047892098" Feb 28 03:49:00 crc kubenswrapper[4624]: 
I0228 03:49:00.098670 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwkf2"] Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.098894 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rwkf2" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="registry-server" containerID="cri-o://666fb09b63531d274926416609d5db07304e9edacaa3ddf874f656ee19cd1cb9" gracePeriod=2 Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.357034 4624 generic.go:334] "Generic (PLEG): container finished" podID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerID="666fb09b63531d274926416609d5db07304e9edacaa3ddf874f656ee19cd1cb9" exitCode=0 Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.357166 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerDied","Data":"666fb09b63531d274926416609d5db07304e9edacaa3ddf874f656ee19cd1cb9"} Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.563689 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.682041 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-utilities\") pod \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.682199 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-catalog-content\") pod \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.682261 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqtgv\" (UniqueName: \"kubernetes.io/projected/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-kube-api-access-zqtgv\") pod \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\" (UID: \"6eed19ed-ecb0-443f-9bd0-86e228d57dc5\") " Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.683164 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-utilities" (OuterVolumeSpecName: "utilities") pod "6eed19ed-ecb0-443f-9bd0-86e228d57dc5" (UID: "6eed19ed-ecb0-443f-9bd0-86e228d57dc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.702249 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-kube-api-access-zqtgv" (OuterVolumeSpecName: "kube-api-access-zqtgv") pod "6eed19ed-ecb0-443f-9bd0-86e228d57dc5" (UID: "6eed19ed-ecb0-443f-9bd0-86e228d57dc5"). InnerVolumeSpecName "kube-api-access-zqtgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.783349 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqtgv\" (UniqueName: \"kubernetes.io/projected/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-kube-api-access-zqtgv\") on node \"crc\" DevicePath \"\"" Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.783383 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.803420 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eed19ed-ecb0-443f-9bd0-86e228d57dc5" (UID: "6eed19ed-ecb0-443f-9bd0-86e228d57dc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:49:00 crc kubenswrapper[4624]: I0228 03:49:00.884565 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eed19ed-ecb0-443f-9bd0-86e228d57dc5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.365009 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwkf2" event={"ID":"6eed19ed-ecb0-443f-9bd0-86e228d57dc5","Type":"ContainerDied","Data":"c01171acafb265b446801fbec18856a3e79f6371fc05978fc04ed547e253cac6"} Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.365124 4624 scope.go:117] "RemoveContainer" containerID="666fb09b63531d274926416609d5db07304e9edacaa3ddf874f656ee19cd1cb9" Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.365169 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwkf2" Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.384345 4624 scope.go:117] "RemoveContainer" containerID="2fa106c1c66737fdee29dad5eaa99442491963619549295f3ae52aa8f4c55ad9" Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.403723 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwkf2"] Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.420612 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rwkf2"] Feb 28 03:49:01 crc kubenswrapper[4624]: I0228 03:49:01.449794 4624 scope.go:117] "RemoveContainer" containerID="1c48bec9b19b6f8837894a3d1113a596d758d3e570e48a884ae9b722826ddc0a" Feb 28 03:49:02 crc kubenswrapper[4624]: I0228 03:49:02.095000 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" path="/var/lib/kubelet/pods/6eed19ed-ecb0-443f-9bd0-86e228d57dc5/volumes" Feb 28 03:49:12 crc kubenswrapper[4624]: I0228 03:49:12.306940 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-694dbf9577-jnbcr" Feb 28 03:49:19 crc kubenswrapper[4624]: I0228 03:49:19.540001 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:49:19 crc kubenswrapper[4624]: I0228 03:49:19.540712 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:49:19 crc 
kubenswrapper[4624]: I0228 03:49:19.540773 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:49:19 crc kubenswrapper[4624]: I0228 03:49:19.541567 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d90ea216a2f4b67d549472e18b2176a4478f7a69481157402ae530c48f3b1213"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 03:49:19 crc kubenswrapper[4624]: I0228 03:49:19.541639 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://d90ea216a2f4b67d549472e18b2176a4478f7a69481157402ae530c48f3b1213" gracePeriod=600 Feb 28 03:49:20 crc kubenswrapper[4624]: I0228 03:49:20.489453 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="d90ea216a2f4b67d549472e18b2176a4478f7a69481157402ae530c48f3b1213" exitCode=0 Feb 28 03:49:20 crc kubenswrapper[4624]: I0228 03:49:20.489960 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"d90ea216a2f4b67d549472e18b2176a4478f7a69481157402ae530c48f3b1213"} Feb 28 03:49:20 crc kubenswrapper[4624]: I0228 03:49:20.490043 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"5ae4a4e8e6c778ba7c9f4e2d8ca7006770f5fd2af20468097f12f94d4858478d"} Feb 28 03:49:20 crc kubenswrapper[4624]: I0228 
03:49:20.490134 4624 scope.go:117] "RemoveContainer" containerID="1dbd0294073b9fff5d98ecfa7b194254718a0891e28c241ce447d5bdb488d8e1" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.510697 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnnvc"] Feb 28 03:49:31 crc kubenswrapper[4624]: E0228 03:49:31.511988 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="registry-server" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.512012 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="registry-server" Feb 28 03:49:31 crc kubenswrapper[4624]: E0228 03:49:31.512035 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="extract-utilities" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.512049 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="extract-utilities" Feb 28 03:49:31 crc kubenswrapper[4624]: E0228 03:49:31.512107 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="extract-content" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.512122 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="extract-content" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.512314 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eed19ed-ecb0-443f-9bd0-86e228d57dc5" containerName="registry-server" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.516876 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.565308 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnnvc"] Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.635454 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-catalog-content\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.635503 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-utilities\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.635549 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bp88\" (UniqueName: \"kubernetes.io/projected/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-kube-api-access-4bp88\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.637352 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6647b8f4b6-mkvwl" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.736714 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bp88\" (UniqueName: \"kubernetes.io/projected/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-kube-api-access-4bp88\") pod 
\"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.736833 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-catalog-content\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.736870 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-utilities\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.737890 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-utilities\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.738307 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-catalog-content\") pod \"certified-operators-rnnvc\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.758503 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bp88\" (UniqueName: \"kubernetes.io/projected/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-kube-api-access-4bp88\") pod \"certified-operators-rnnvc\" (UID: 
\"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:31 crc kubenswrapper[4624]: I0228 03:49:31.869984 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.120340 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnnvc"] Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.588528 4624 generic.go:334] "Generic (PLEG): container finished" podID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerID="17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c" exitCode=0 Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.588594 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerDied","Data":"17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c"} Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.588630 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerStarted","Data":"3437ea02dca2795bea80172b9bbae1587db495ccf1815bd12865c40f94618b17"} Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.590980 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.598151 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cpl69"] Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.600373 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.623960 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.623960 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-d6bj6" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.624341 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.651028 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w"] Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.651837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ds2w\" (UniqueName: \"kubernetes.io/projected/9c99c5b7-87a8-483e-8c66-2f5918d657c0-kube-api-access-7ds2w\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.651907 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-reloader\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.651933 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-conf\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.652051 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.652075 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics-certs\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.652133 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-startup\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.652166 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-sockets\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.653037 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.689425 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.693111 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w"] Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-reloader\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753498 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-conf\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753524 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753547 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics-certs\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753577 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-startup\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753619 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-sockets\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753653 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ds2w\" (UniqueName: \"kubernetes.io/projected/9c99c5b7-87a8-483e-8c66-2f5918d657c0-kube-api-access-7ds2w\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753684 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a14d415d-3a62-412a-8c98-13543a8bb573-cert\") pod \"frr-k8s-webhook-server-7f989f654f-8dm5w\" (UID: \"a14d415d-3a62-412a-8c98-13543a8bb573\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753705 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdcs\" (UniqueName: \"kubernetes.io/projected/a14d415d-3a62-412a-8c98-13543a8bb573-kube-api-access-bzdcs\") pod \"frr-k8s-webhook-server-7f989f654f-8dm5w\" (UID: \"a14d415d-3a62-412a-8c98-13543a8bb573\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: E0228 03:49:32.753765 4624 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 28 
03:49:32 crc kubenswrapper[4624]: E0228 03:49:32.753860 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics-certs podName:9c99c5b7-87a8-483e-8c66-2f5918d657c0 nodeName:}" failed. No retries permitted until 2026-02-28 03:49:33.253838089 +0000 UTC m=+827.917877398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics-certs") pod "frr-k8s-cpl69" (UID: "9c99c5b7-87a8-483e-8c66-2f5918d657c0") : secret "frr-k8s-certs-secret" not found Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.753964 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-reloader\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.754067 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.754229 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-conf\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.754436 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-sockets\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " 
pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.754899 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9c99c5b7-87a8-483e-8c66-2f5918d657c0-frr-startup\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.796005 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ds2w\" (UniqueName: \"kubernetes.io/projected/9c99c5b7-87a8-483e-8c66-2f5918d657c0-kube-api-access-7ds2w\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.854868 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a14d415d-3a62-412a-8c98-13543a8bb573-cert\") pod \"frr-k8s-webhook-server-7f989f654f-8dm5w\" (UID: \"a14d415d-3a62-412a-8c98-13543a8bb573\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.854910 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdcs\" (UniqueName: \"kubernetes.io/projected/a14d415d-3a62-412a-8c98-13543a8bb573-kube-api-access-bzdcs\") pod \"frr-k8s-webhook-server-7f989f654f-8dm5w\" (UID: \"a14d415d-3a62-412a-8c98-13543a8bb573\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.857741 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a14d415d-3a62-412a-8c98-13543a8bb573-cert\") pod \"frr-k8s-webhook-server-7f989f654f-8dm5w\" (UID: \"a14d415d-3a62-412a-8c98-13543a8bb573\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc 
kubenswrapper[4624]: I0228 03:49:32.874069 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8qstw"] Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.874899 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8qstw" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.882279 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.882349 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.895001 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-j78kd" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.895168 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.901140 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdcs\" (UniqueName: \"kubernetes.io/projected/a14d415d-3a62-412a-8c98-13543a8bb573-kube-api-access-bzdcs\") pod \"frr-k8s-webhook-server-7f989f654f-8dm5w\" (UID: \"a14d415d-3a62-412a-8c98-13543a8bb573\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.944080 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-dj7kv"] Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.944938 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.949629 4624 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.957350 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.957854 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kth7\" (UniqueName: \"kubernetes.io/projected/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-kube-api-access-7kth7\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.958063 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metallb-excludel2\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.958291 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.966952 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:32 crc kubenswrapper[4624]: I0228 03:49:32.977130 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-dj7kv"] Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.059878 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metallb-excludel2\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.059951 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.059992 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0633733e-c39d-4767-883b-e1b16be08190-metrics-certs\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.060010 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.060054 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0633733e-c39d-4767-883b-e1b16be08190-cert\") pod 
\"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.060074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kth7\" (UniqueName: \"kubernetes.io/projected/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-kube-api-access-7kth7\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.060114 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c6j9\" (UniqueName: \"kubernetes.io/projected/0633733e-c39d-4767-883b-e1b16be08190-kube-api-access-6c6j9\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.060639 4624 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.060731 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist podName:ebe5dd19-2d46-4a69-9847-bc91d0cd4423 nodeName:}" failed. No retries permitted until 2026-02-28 03:49:33.560712751 +0000 UTC m=+828.224752060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist") pod "speaker-8qstw" (UID: "ebe5dd19-2d46-4a69-9847-bc91d0cd4423") : secret "metallb-memberlist" not found Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.060785 4624 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.060791 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metallb-excludel2\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.060807 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs podName:ebe5dd19-2d46-4a69-9847-bc91d0cd4423 nodeName:}" failed. No retries permitted until 2026-02-28 03:49:33.560801183 +0000 UTC m=+828.224840492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs") pod "speaker-8qstw" (UID: "ebe5dd19-2d46-4a69-9847-bc91d0cd4423") : secret "speaker-certs-secret" not found Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.114802 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kth7\" (UniqueName: \"kubernetes.io/projected/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-kube-api-access-7kth7\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.161130 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0633733e-c39d-4767-883b-e1b16be08190-metrics-certs\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.161266 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0633733e-c39d-4767-883b-e1b16be08190-cert\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.161293 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c6j9\" (UniqueName: \"kubernetes.io/projected/0633733e-c39d-4767-883b-e1b16be08190-kube-api-access-6c6j9\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.168657 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0633733e-c39d-4767-883b-e1b16be08190-metrics-certs\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.169440 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0633733e-c39d-4767-883b-e1b16be08190-cert\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.217339 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c6j9\" (UniqueName: \"kubernetes.io/projected/0633733e-c39d-4767-883b-e1b16be08190-kube-api-access-6c6j9\") pod \"controller-86ddb6bd46-dj7kv\" (UID: \"0633733e-c39d-4767-883b-e1b16be08190\") " pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.259599 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.262908 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics-certs\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.282885 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c99c5b7-87a8-483e-8c66-2f5918d657c0-metrics-certs\") pod \"frr-k8s-cpl69\" (UID: \"9c99c5b7-87a8-483e-8c66-2f5918d657c0\") " pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.447039 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w"] Feb 28 03:49:33 crc kubenswrapper[4624]: W0228 03:49:33.478261 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda14d415d_3a62_412a_8c98_13543a8bb573.slice/crio-76a1ab683a6172ad1f2dba18f8b278e4548cd2f1c9107d7f27e4f034270e6127 WatchSource:0}: Error finding container 76a1ab683a6172ad1f2dba18f8b278e4548cd2f1c9107d7f27e4f034270e6127: Status 404 returned error can't find the container with id 76a1ab683a6172ad1f2dba18f8b278e4548cd2f1c9107d7f27e4f034270e6127 Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.517494 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.567357 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.567984 4624 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.568189 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs podName:ebe5dd19-2d46-4a69-9847-bc91d0cd4423 nodeName:}" failed. No retries permitted until 2026-02-28 03:49:34.568154482 +0000 UTC m=+829.232193791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs") pod "speaker-8qstw" (UID: "ebe5dd19-2d46-4a69-9847-bc91d0cd4423") : secret "speaker-certs-secret" not found Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.568405 4624 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 03:49:33 crc kubenswrapper[4624]: E0228 03:49:33.568511 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist podName:ebe5dd19-2d46-4a69-9847-bc91d0cd4423 nodeName:}" failed. No retries permitted until 2026-02-28 03:49:34.568482851 +0000 UTC m=+829.232522350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist") pod "speaker-8qstw" (UID: "ebe5dd19-2d46-4a69-9847-bc91d0cd4423") : secret "metallb-memberlist" not found Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.568068 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.571423 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-dj7kv"] Feb 28 03:49:33 crc kubenswrapper[4624]: W0228 03:49:33.586359 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0633733e_c39d_4767_883b_e1b16be08190.slice/crio-c8a7a7f1f286dc7f296d4efbe00321dc7d9681a35dd38400d925b853ace75474 WatchSource:0}: Error finding container c8a7a7f1f286dc7f296d4efbe00321dc7d9681a35dd38400d925b853ace75474: Status 404 returned error can't find the container with id c8a7a7f1f286dc7f296d4efbe00321dc7d9681a35dd38400d925b853ace75474 Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.629066 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-dj7kv" event={"ID":"0633733e-c39d-4767-883b-e1b16be08190","Type":"ContainerStarted","Data":"c8a7a7f1f286dc7f296d4efbe00321dc7d9681a35dd38400d925b853ace75474"} Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 03:49:33.630729 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerStarted","Data":"3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa"} Feb 28 03:49:33 crc kubenswrapper[4624]: I0228 
03:49:33.638474 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" event={"ID":"a14d415d-3a62-412a-8c98-13543a8bb573","Type":"ContainerStarted","Data":"76a1ab683a6172ad1f2dba18f8b278e4548cd2f1c9107d7f27e4f034270e6127"} Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.589300 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.590454 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.597884 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-metrics-certs\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.598247 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ebe5dd19-2d46-4a69-9847-bc91d0cd4423-memberlist\") pod \"speaker-8qstw\" (UID: \"ebe5dd19-2d46-4a69-9847-bc91d0cd4423\") " pod="metallb-system/speaker-8qstw" Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.652465 4624 generic.go:334] "Generic (PLEG): container finished" podID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerID="3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa" exitCode=0 Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 
03:49:34.652531 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerDied","Data":"3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa"} Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.656547 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"76057b5be3d835e771bb72abfcef59ddd4db7a58466e4d1f87d01ea695c1ae7b"} Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.662391 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-dj7kv" event={"ID":"0633733e-c39d-4767-883b-e1b16be08190","Type":"ContainerStarted","Data":"daae4d1f701c2c2dd990db7a3f90481073f27bec81d020c67161c87086c378ef"} Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.662457 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-dj7kv" event={"ID":"0633733e-c39d-4767-883b-e1b16be08190","Type":"ContainerStarted","Data":"67c61f46fc8c6cae8ae2147701ce229ceecd6fad47a5de876dd2bfe653546188"} Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.662550 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.691658 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8qstw" Feb 28 03:49:34 crc kubenswrapper[4624]: I0228 03:49:34.697733 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-dj7kv" podStartSLOduration=2.697703664 podStartE2EDuration="2.697703664s" podCreationTimestamp="2026-02-28 03:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:49:34.691664999 +0000 UTC m=+829.355704308" watchObservedRunningTime="2026-02-28 03:49:34.697703664 +0000 UTC m=+829.361742973" Feb 28 03:49:34 crc kubenswrapper[4624]: W0228 03:49:34.723042 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebe5dd19_2d46_4a69_9847_bc91d0cd4423.slice/crio-374836b0418c04b9d69f216d5f9a097696b7c2e300313239f853f7df3f4f8ef8 WatchSource:0}: Error finding container 374836b0418c04b9d69f216d5f9a097696b7c2e300313239f853f7df3f4f8ef8: Status 404 returned error can't find the container with id 374836b0418c04b9d69f216d5f9a097696b7c2e300313239f853f7df3f4f8ef8 Feb 28 03:49:35 crc kubenswrapper[4624]: I0228 03:49:35.680325 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerStarted","Data":"17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04"} Feb 28 03:49:35 crc kubenswrapper[4624]: I0228 03:49:35.690829 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8qstw" event={"ID":"ebe5dd19-2d46-4a69-9847-bc91d0cd4423","Type":"ContainerStarted","Data":"73ca4cd4d77d59f3c1783bf5ee32eef89d6455ef30d575d0db66e35ff1b2d43c"} Feb 28 03:49:35 crc kubenswrapper[4624]: I0228 03:49:35.690898 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8qstw" 
event={"ID":"ebe5dd19-2d46-4a69-9847-bc91d0cd4423","Type":"ContainerStarted","Data":"374836b0418c04b9d69f216d5f9a097696b7c2e300313239f853f7df3f4f8ef8"} Feb 28 03:49:35 crc kubenswrapper[4624]: I0228 03:49:35.717677 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnnvc" podStartSLOduration=2.192140659 podStartE2EDuration="4.717654523s" podCreationTimestamp="2026-02-28 03:49:31 +0000 UTC" firstStartedPulling="2026-02-28 03:49:32.590688749 +0000 UTC m=+827.254728058" lastFinishedPulling="2026-02-28 03:49:35.116202613 +0000 UTC m=+829.780241922" observedRunningTime="2026-02-28 03:49:35.713830609 +0000 UTC m=+830.377869918" watchObservedRunningTime="2026-02-28 03:49:35.717654523 +0000 UTC m=+830.381693832" Feb 28 03:49:36 crc kubenswrapper[4624]: I0228 03:49:36.704512 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8qstw" event={"ID":"ebe5dd19-2d46-4a69-9847-bc91d0cd4423","Type":"ContainerStarted","Data":"f40416c4366a0cfac8ed2d9cda3250c2b9c57fcc7f8d455aa92a50cb74f511ca"} Feb 28 03:49:36 crc kubenswrapper[4624]: I0228 03:49:36.704837 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8qstw" Feb 28 03:49:36 crc kubenswrapper[4624]: I0228 03:49:36.729346 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8qstw" podStartSLOduration=4.729329456 podStartE2EDuration="4.729329456s" podCreationTimestamp="2026-02-28 03:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:49:36.724630949 +0000 UTC m=+831.388670258" watchObservedRunningTime="2026-02-28 03:49:36.729329456 +0000 UTC m=+831.393368765" Feb 28 03:49:41 crc kubenswrapper[4624]: I0228 03:49:41.878512 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:41 crc kubenswrapper[4624]: I0228 03:49:41.879737 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:41 crc kubenswrapper[4624]: I0228 03:49:41.946734 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:42 crc kubenswrapper[4624]: I0228 03:49:42.801331 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:42 crc kubenswrapper[4624]: I0228 03:49:42.853898 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnnvc"] Feb 28 03:49:43 crc kubenswrapper[4624]: I0228 03:49:43.269246 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-dj7kv" Feb 28 03:49:43 crc kubenswrapper[4624]: I0228 03:49:43.762227 4624 generic.go:334] "Generic (PLEG): container finished" podID="9c99c5b7-87a8-483e-8c66-2f5918d657c0" containerID="0ce2b60be4b65f004cbd86d18e0b5e07899fdf56ac2dca0f56b355198debb4cc" exitCode=0 Feb 28 03:49:43 crc kubenswrapper[4624]: I0228 03:49:43.762347 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerDied","Data":"0ce2b60be4b65f004cbd86d18e0b5e07899fdf56ac2dca0f56b355198debb4cc"} Feb 28 03:49:43 crc kubenswrapper[4624]: I0228 03:49:43.767008 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" event={"ID":"a14d415d-3a62-412a-8c98-13543a8bb573","Type":"ContainerStarted","Data":"ef04818ddd20d34fd4c4e2e987ad389edae00251690693e29a8b62c53ac9d476"} Feb 28 03:49:43 crc kubenswrapper[4624]: I0228 03:49:43.767209 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:49:43 crc kubenswrapper[4624]: I0228 03:49:43.816280 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" podStartSLOduration=2.352595242 podStartE2EDuration="11.816258105s" podCreationTimestamp="2026-02-28 03:49:32 +0000 UTC" firstStartedPulling="2026-02-28 03:49:33.481687139 +0000 UTC m=+828.145726448" lastFinishedPulling="2026-02-28 03:49:42.945350002 +0000 UTC m=+837.609389311" observedRunningTime="2026-02-28 03:49:43.815206026 +0000 UTC m=+838.479245335" watchObservedRunningTime="2026-02-28 03:49:43.816258105 +0000 UTC m=+838.480297424" Feb 28 03:49:44 crc kubenswrapper[4624]: I0228 03:49:44.777530 4624 generic.go:334] "Generic (PLEG): container finished" podID="9c99c5b7-87a8-483e-8c66-2f5918d657c0" containerID="c72e6259bc4c72ef847769f70fe73739554abf9045a91f51212b506ffac98796" exitCode=0 Feb 28 03:49:44 crc kubenswrapper[4624]: I0228 03:49:44.777911 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnnvc" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="registry-server" containerID="cri-o://17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04" gracePeriod=2 Feb 28 03:49:44 crc kubenswrapper[4624]: I0228 03:49:44.779661 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerDied","Data":"c72e6259bc4c72ef847769f70fe73739554abf9045a91f51212b506ffac98796"} Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.190831 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.213486 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bp88\" (UniqueName: \"kubernetes.io/projected/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-kube-api-access-4bp88\") pod \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.213590 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-catalog-content\") pod \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.213641 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-utilities\") pod \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\" (UID: \"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf\") " Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.219073 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-utilities" (OuterVolumeSpecName: "utilities") pod "88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" (UID: "88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.226685 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-kube-api-access-4bp88" (OuterVolumeSpecName: "kube-api-access-4bp88") pod "88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" (UID: "88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf"). InnerVolumeSpecName "kube-api-access-4bp88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.280868 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" (UID: "88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.316888 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bp88\" (UniqueName: \"kubernetes.io/projected/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-kube-api-access-4bp88\") on node \"crc\" DevicePath \"\"" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.316923 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.316934 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.785738 4624 generic.go:334] "Generic (PLEG): container finished" podID="9c99c5b7-87a8-483e-8c66-2f5918d657c0" containerID="4adbd9723008f77699cc2ce818bf2d08c199b387b5482ad482c2db300e057eeb" exitCode=0 Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.785809 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerDied","Data":"4adbd9723008f77699cc2ce818bf2d08c199b387b5482ad482c2db300e057eeb"} Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.794254 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerID="17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04" exitCode=0 Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.794292 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerDied","Data":"17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04"} Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.794317 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnnvc" event={"ID":"88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf","Type":"ContainerDied","Data":"3437ea02dca2795bea80172b9bbae1587db495ccf1815bd12865c40f94618b17"} Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.794336 4624 scope.go:117] "RemoveContainer" containerID="17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.794447 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnnvc" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.822491 4624 scope.go:117] "RemoveContainer" containerID="3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.873131 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnnvc"] Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.886738 4624 scope.go:117] "RemoveContainer" containerID="17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.893699 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnnvc"] Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.938906 4624 scope.go:117] "RemoveContainer" containerID="17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04" Feb 28 03:49:45 crc kubenswrapper[4624]: E0228 03:49:45.940907 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04\": container with ID starting with 17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04 not found: ID does not exist" containerID="17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.940948 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04"} err="failed to get container status \"17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04\": rpc error: code = NotFound desc = could not find container \"17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04\": container with ID starting with 17f25504c76f6ae0d22d674214ca962ee082da135fa4f27c209f5ee6c8faac04 not 
found: ID does not exist" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.940974 4624 scope.go:117] "RemoveContainer" containerID="3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa" Feb 28 03:49:45 crc kubenswrapper[4624]: E0228 03:49:45.941596 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa\": container with ID starting with 3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa not found: ID does not exist" containerID="3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.941627 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa"} err="failed to get container status \"3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa\": rpc error: code = NotFound desc = could not find container \"3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa\": container with ID starting with 3e73eba2f74849028ec2be88acc377ca70605d123e5b16bfd6b266866e1871aa not found: ID does not exist" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.941648 4624 scope.go:117] "RemoveContainer" containerID="17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c" Feb 28 03:49:45 crc kubenswrapper[4624]: E0228 03:49:45.942550 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c\": container with ID starting with 17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c not found: ID does not exist" containerID="17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c" Feb 28 03:49:45 crc kubenswrapper[4624]: I0228 03:49:45.942599 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c"} err="failed to get container status \"17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c\": rpc error: code = NotFound desc = could not find container \"17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c\": container with ID starting with 17f6f2852bd0c9a587a67837f165892c6d67e512d13e1c59c3533015ff24d45c not found: ID does not exist" Feb 28 03:49:46 crc kubenswrapper[4624]: I0228 03:49:46.096236 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" path="/var/lib/kubelet/pods/88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf/volumes" Feb 28 03:49:46 crc kubenswrapper[4624]: I0228 03:49:46.807886 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"74cb8fd76a795d2ae86e91e6f0bb215414545dbb22abba99782bddaebb4d44ae"} Feb 28 03:49:46 crc kubenswrapper[4624]: I0228 03:49:46.808408 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"e67a4dd8de1b78e53ded198235a332a5dd98c90a31e1fb9da0b9f107069b2005"} Feb 28 03:49:46 crc kubenswrapper[4624]: I0228 03:49:46.808425 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"2f940236061a41a40636d795008d1fbfc81a5c703362865510319fb909d3dd3d"} Feb 28 03:49:46 crc kubenswrapper[4624]: I0228 03:49:46.808437 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"630cf789e9f892bad0a69fb224c0c7d5e6426fafb761d0cad7c50bd5654e5257"} Feb 
28 03:49:46 crc kubenswrapper[4624]: I0228 03:49:46.808447 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"3da7005483db010d88491c8e37b7ce6c2a46b70f0c5b5d26775932c89cf3ee87"} Feb 28 03:49:47 crc kubenswrapper[4624]: I0228 03:49:47.824360 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpl69" event={"ID":"9c99c5b7-87a8-483e-8c66-2f5918d657c0","Type":"ContainerStarted","Data":"5b121a48abebd454d31ea5f7f73c083e77d85bbeefcb1015efe57955651bdc3f"} Feb 28 03:49:47 crc kubenswrapper[4624]: I0228 03:49:47.825854 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:47 crc kubenswrapper[4624]: I0228 03:49:47.883069 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cpl69" podStartSLOduration=6.606452946 podStartE2EDuration="15.883047377s" podCreationTimestamp="2026-02-28 03:49:32 +0000 UTC" firstStartedPulling="2026-02-28 03:49:33.689857255 +0000 UTC m=+828.353896564" lastFinishedPulling="2026-02-28 03:49:42.966451686 +0000 UTC m=+837.630490995" observedRunningTime="2026-02-28 03:49:47.878864203 +0000 UTC m=+842.542903512" watchObservedRunningTime="2026-02-28 03:49:47.883047377 +0000 UTC m=+842.547086696" Feb 28 03:49:48 crc kubenswrapper[4624]: I0228 03:49:48.518582 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:48 crc kubenswrapper[4624]: I0228 03:49:48.603719 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cpl69" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.764552 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6v4dt"] Feb 28 03:49:51 crc kubenswrapper[4624]: E0228 03:49:51.765089 4624 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="extract-utilities" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.765122 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="extract-utilities" Feb 28 03:49:51 crc kubenswrapper[4624]: E0228 03:49:51.765136 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="registry-server" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.765145 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="registry-server" Feb 28 03:49:51 crc kubenswrapper[4624]: E0228 03:49:51.765166 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="extract-content" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.765175 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="extract-content" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.765309 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e75fb7-5d01-45b0-9faa-84a0c0d8c2bf" containerName="registry-server" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.768940 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.785911 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6v4dt"] Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.819988 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-utilities\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.820083 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-catalog-content\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.820142 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5sx\" (UniqueName: \"kubernetes.io/projected/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-kube-api-access-cb5sx\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.922285 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-utilities\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.922791 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-catalog-content\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.922743 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-utilities\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.922869 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5sx\" (UniqueName: \"kubernetes.io/projected/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-kube-api-access-cb5sx\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.923130 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-catalog-content\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:51 crc kubenswrapper[4624]: I0228 03:49:51.951948 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5sx\" (UniqueName: \"kubernetes.io/projected/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-kube-api-access-cb5sx\") pod \"community-operators-6v4dt\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:52 crc kubenswrapper[4624]: I0228 03:49:52.094446 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:49:52 crc kubenswrapper[4624]: I0228 03:49:52.643331 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6v4dt"] Feb 28 03:49:52 crc kubenswrapper[4624]: W0228 03:49:52.649903 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf85ee41_9f19_4bea_91c4_d58b9cb055b4.slice/crio-145e3d15acd29764bd27f78a50af8a83767814f28a7d6427ef4c642965f71d28 WatchSource:0}: Error finding container 145e3d15acd29764bd27f78a50af8a83767814f28a7d6427ef4c642965f71d28: Status 404 returned error can't find the container with id 145e3d15acd29764bd27f78a50af8a83767814f28a7d6427ef4c642965f71d28 Feb 28 03:49:52 crc kubenswrapper[4624]: I0228 03:49:52.869835 4624 generic.go:334] "Generic (PLEG): container finished" podID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerID="33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3" exitCode=0 Feb 28 03:49:52 crc kubenswrapper[4624]: I0228 03:49:52.870108 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v4dt" event={"ID":"bf85ee41-9f19-4bea-91c4-d58b9cb055b4","Type":"ContainerDied","Data":"33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3"} Feb 28 03:49:52 crc kubenswrapper[4624]: I0228 03:49:52.870440 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v4dt" event={"ID":"bf85ee41-9f19-4bea-91c4-d58b9cb055b4","Type":"ContainerStarted","Data":"145e3d15acd29764bd27f78a50af8a83767814f28a7d6427ef4c642965f71d28"} Feb 28 03:49:54 crc kubenswrapper[4624]: I0228 03:49:54.701868 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8qstw" Feb 28 03:49:54 crc kubenswrapper[4624]: I0228 03:49:54.890802 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerID="1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016" exitCode=0 Feb 28 03:49:54 crc kubenswrapper[4624]: I0228 03:49:54.890848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v4dt" event={"ID":"bf85ee41-9f19-4bea-91c4-d58b9cb055b4","Type":"ContainerDied","Data":"1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016"} Feb 28 03:49:55 crc kubenswrapper[4624]: I0228 03:49:55.899353 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v4dt" event={"ID":"bf85ee41-9f19-4bea-91c4-d58b9cb055b4","Type":"ContainerStarted","Data":"f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd"} Feb 28 03:49:55 crc kubenswrapper[4624]: I0228 03:49:55.918498 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6v4dt" podStartSLOduration=2.5263767980000003 podStartE2EDuration="4.918478047s" podCreationTimestamp="2026-02-28 03:49:51 +0000 UTC" firstStartedPulling="2026-02-28 03:49:52.871686437 +0000 UTC m=+847.535725756" lastFinishedPulling="2026-02-28 03:49:55.263787656 +0000 UTC m=+849.927827005" observedRunningTime="2026-02-28 03:49:55.915913457 +0000 UTC m=+850.579952786" watchObservedRunningTime="2026-02-28 03:49:55.918478047 +0000 UTC m=+850.582517396" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.603849 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b5cxk"] Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.605181 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.607553 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qtg7r" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.608025 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.608509 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.622898 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b5cxk"] Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.722173 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7qk\" (UniqueName: \"kubernetes.io/projected/877611c7-6ae9-41ac-aa32-28d0f42c0e14-kube-api-access-jw7qk\") pod \"openstack-operator-index-b5cxk\" (UID: \"877611c7-6ae9-41ac-aa32-28d0f42c0e14\") " pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.823385 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw7qk\" (UniqueName: \"kubernetes.io/projected/877611c7-6ae9-41ac-aa32-28d0f42c0e14-kube-api-access-jw7qk\") pod \"openstack-operator-index-b5cxk\" (UID: \"877611c7-6ae9-41ac-aa32-28d0f42c0e14\") " pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.847359 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw7qk\" (UniqueName: \"kubernetes.io/projected/877611c7-6ae9-41ac-aa32-28d0f42c0e14-kube-api-access-jw7qk\") pod \"openstack-operator-index-b5cxk\" (UID: 
\"877611c7-6ae9-41ac-aa32-28d0f42c0e14\") " pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:49:57 crc kubenswrapper[4624]: I0228 03:49:57.924900 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:49:58 crc kubenswrapper[4624]: I0228 03:49:58.398480 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b5cxk"] Feb 28 03:49:58 crc kubenswrapper[4624]: W0228 03:49:58.399069 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877611c7_6ae9_41ac_aa32_28d0f42c0e14.slice/crio-0de4ae07af9bc9db2bcab035323479c31d4b7fe773534405b553119c91377e14 WatchSource:0}: Error finding container 0de4ae07af9bc9db2bcab035323479c31d4b7fe773534405b553119c91377e14: Status 404 returned error can't find the container with id 0de4ae07af9bc9db2bcab035323479c31d4b7fe773534405b553119c91377e14 Feb 28 03:49:58 crc kubenswrapper[4624]: I0228 03:49:58.930277 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5cxk" event={"ID":"877611c7-6ae9-41ac-aa32-28d0f42c0e14","Type":"ContainerStarted","Data":"0de4ae07af9bc9db2bcab035323479c31d4b7fe773534405b553119c91377e14"} Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.138615 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537510-46lp8"] Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.140150 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.143774 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.144231 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.148362 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.159211 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537510-46lp8"] Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.260698 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwkfq\" (UniqueName: \"kubernetes.io/projected/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1-kube-api-access-pwkfq\") pod \"auto-csr-approver-29537510-46lp8\" (UID: \"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1\") " pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.361956 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwkfq\" (UniqueName: \"kubernetes.io/projected/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1-kube-api-access-pwkfq\") pod \"auto-csr-approver-29537510-46lp8\" (UID: \"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1\") " pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.392750 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwkfq\" (UniqueName: \"kubernetes.io/projected/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1-kube-api-access-pwkfq\") pod \"auto-csr-approver-29537510-46lp8\" (UID: \"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1\") " 
pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:00 crc kubenswrapper[4624]: I0228 03:50:00.476143 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:01 crc kubenswrapper[4624]: I0228 03:50:01.350269 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537510-46lp8"] Feb 28 03:50:01 crc kubenswrapper[4624]: I0228 03:50:01.738874 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b5cxk"] Feb 28 03:50:01 crc kubenswrapper[4624]: I0228 03:50:01.955772 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5cxk" event={"ID":"877611c7-6ae9-41ac-aa32-28d0f42c0e14","Type":"ContainerStarted","Data":"79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd"} Feb 28 03:50:01 crc kubenswrapper[4624]: I0228 03:50:01.957618 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537510-46lp8" event={"ID":"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1","Type":"ContainerStarted","Data":"0920aff1babf8e749e66264f7deae5694ade982b3a71d9bd9e889b12d288e1b7"} Feb 28 03:50:01 crc kubenswrapper[4624]: I0228 03:50:01.981121 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b5cxk" podStartSLOduration=2.288694838 podStartE2EDuration="4.981075513s" podCreationTimestamp="2026-02-28 03:49:57 +0000 UTC" firstStartedPulling="2026-02-28 03:49:58.402265338 +0000 UTC m=+853.066304657" lastFinishedPulling="2026-02-28 03:50:01.094646023 +0000 UTC m=+855.758685332" observedRunningTime="2026-02-28 03:50:01.97654641 +0000 UTC m=+856.640585749" watchObservedRunningTime="2026-02-28 03:50:01.981075513 +0000 UTC m=+856.645114852" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.098190 4624 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.098236 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.150871 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.540719 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6r9np"] Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.543386 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.563024 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6r9np"] Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.597638 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgcm\" (UniqueName: \"kubernetes.io/projected/5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be-kube-api-access-qdgcm\") pod \"openstack-operator-index-6r9np\" (UID: \"5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be\") " pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.700245 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgcm\" (UniqueName: \"kubernetes.io/projected/5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be-kube-api-access-qdgcm\") pod \"openstack-operator-index-6r9np\" (UID: \"5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be\") " pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.740628 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qdgcm\" (UniqueName: \"kubernetes.io/projected/5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be-kube-api-access-qdgcm\") pod \"openstack-operator-index-6r9np\" (UID: \"5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be\") " pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.873667 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.973645 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-b5cxk" podUID="877611c7-6ae9-41ac-aa32-28d0f42c0e14" containerName="registry-server" containerID="cri-o://79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd" gracePeriod=2 Feb 28 03:50:02 crc kubenswrapper[4624]: I0228 03:50:02.996323 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-8dm5w" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.145150 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.383415 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.420818 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw7qk\" (UniqueName: \"kubernetes.io/projected/877611c7-6ae9-41ac-aa32-28d0f42c0e14-kube-api-access-jw7qk\") pod \"877611c7-6ae9-41ac-aa32-28d0f42c0e14\" (UID: \"877611c7-6ae9-41ac-aa32-28d0f42c0e14\") " Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.422333 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6r9np"] Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.425746 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877611c7-6ae9-41ac-aa32-28d0f42c0e14-kube-api-access-jw7qk" (OuterVolumeSpecName: "kube-api-access-jw7qk") pod "877611c7-6ae9-41ac-aa32-28d0f42c0e14" (UID: "877611c7-6ae9-41ac-aa32-28d0f42c0e14"). InnerVolumeSpecName "kube-api-access-jw7qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:50:03 crc kubenswrapper[4624]: W0228 03:50:03.428062 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db2fa57_c7df_4bb4_ba80_5aa1a8ee08be.slice/crio-11dc42e5df3eea1abc340ab996fb86cf4a093ad20fcffafed381db06d9496c4f WatchSource:0}: Error finding container 11dc42e5df3eea1abc340ab996fb86cf4a093ad20fcffafed381db06d9496c4f: Status 404 returned error can't find the container with id 11dc42e5df3eea1abc340ab996fb86cf4a093ad20fcffafed381db06d9496c4f Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.523405 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw7qk\" (UniqueName: \"kubernetes.io/projected/877611c7-6ae9-41ac-aa32-28d0f42c0e14-kube-api-access-jw7qk\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.524637 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cpl69" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.982236 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6r9np" event={"ID":"5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be","Type":"ContainerStarted","Data":"264684bf35ef3e1848e460adb23a4597e3cba7887babd9fa7ed890ca19cef5fd"} Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.983961 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6r9np" event={"ID":"5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be","Type":"ContainerStarted","Data":"11dc42e5df3eea1abc340ab996fb86cf4a093ad20fcffafed381db06d9496c4f"} Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.984725 4624 generic.go:334] "Generic (PLEG): container finished" podID="877611c7-6ae9-41ac-aa32-28d0f42c0e14" containerID="79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd" exitCode=0 Feb 28 03:50:03 crc kubenswrapper[4624]: 
I0228 03:50:03.984804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5cxk" event={"ID":"877611c7-6ae9-41ac-aa32-28d0f42c0e14","Type":"ContainerDied","Data":"79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd"} Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.984826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5cxk" event={"ID":"877611c7-6ae9-41ac-aa32-28d0f42c0e14","Type":"ContainerDied","Data":"0de4ae07af9bc9db2bcab035323479c31d4b7fe773534405b553119c91377e14"} Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.984847 4624 scope.go:117] "RemoveContainer" containerID="79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.985124 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b5cxk" Feb 28 03:50:03 crc kubenswrapper[4624]: I0228 03:50:03.992606 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537510-46lp8" event={"ID":"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1","Type":"ContainerStarted","Data":"2305eda414191aa7796a7e7928f155097172c0d31b19b4fdd79d7f26e2e3bf41"} Feb 28 03:50:04 crc kubenswrapper[4624]: I0228 03:50:04.002785 4624 scope.go:117] "RemoveContainer" containerID="79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd" Feb 28 03:50:04 crc kubenswrapper[4624]: E0228 03:50:04.003636 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd\": container with ID starting with 79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd not found: ID does not exist" containerID="79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd" Feb 28 03:50:04 crc kubenswrapper[4624]: 
I0228 03:50:04.003671 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd"} err="failed to get container status \"79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd\": rpc error: code = NotFound desc = could not find container \"79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd\": container with ID starting with 79a128ac0b51c290ec20f61a1d693a928a2e76a8bab5879a8ce6b4f2ac2679cd not found: ID does not exist" Feb 28 03:50:04 crc kubenswrapper[4624]: I0228 03:50:04.032269 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6r9np" podStartSLOduration=1.981203359 podStartE2EDuration="2.032252183s" podCreationTimestamp="2026-02-28 03:50:02 +0000 UTC" firstStartedPulling="2026-02-28 03:50:03.434172097 +0000 UTC m=+858.098211406" lastFinishedPulling="2026-02-28 03:50:03.485220911 +0000 UTC m=+858.149260230" observedRunningTime="2026-02-28 03:50:04.013383502 +0000 UTC m=+858.677422841" watchObservedRunningTime="2026-02-28 03:50:04.032252183 +0000 UTC m=+858.696291492" Feb 28 03:50:04 crc kubenswrapper[4624]: I0228 03:50:04.032576 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537510-46lp8" podStartSLOduration=1.806777441 podStartE2EDuration="4.032568113s" podCreationTimestamp="2026-02-28 03:50:00 +0000 UTC" firstStartedPulling="2026-02-28 03:50:01.359747657 +0000 UTC m=+856.023786966" lastFinishedPulling="2026-02-28 03:50:03.585538329 +0000 UTC m=+858.249577638" observedRunningTime="2026-02-28 03:50:04.0291576 +0000 UTC m=+858.693196909" watchObservedRunningTime="2026-02-28 03:50:04.032568113 +0000 UTC m=+858.696607422" Feb 28 03:50:04 crc kubenswrapper[4624]: I0228 03:50:04.057772 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b5cxk"] Feb 28 
03:50:04 crc kubenswrapper[4624]: I0228 03:50:04.062844 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-b5cxk"] Feb 28 03:50:04 crc kubenswrapper[4624]: I0228 03:50:04.093958 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877611c7-6ae9-41ac-aa32-28d0f42c0e14" path="/var/lib/kubelet/pods/877611c7-6ae9-41ac-aa32-28d0f42c0e14/volumes" Feb 28 03:50:05 crc kubenswrapper[4624]: I0228 03:50:05.004424 4624 generic.go:334] "Generic (PLEG): container finished" podID="ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1" containerID="2305eda414191aa7796a7e7928f155097172c0d31b19b4fdd79d7f26e2e3bf41" exitCode=0 Feb 28 03:50:05 crc kubenswrapper[4624]: I0228 03:50:05.004533 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537510-46lp8" event={"ID":"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1","Type":"ContainerDied","Data":"2305eda414191aa7796a7e7928f155097172c0d31b19b4fdd79d7f26e2e3bf41"} Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.147454 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6v4dt"] Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.148182 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6v4dt" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="registry-server" containerID="cri-o://f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd" gracePeriod=2 Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.376784 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.494015 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwkfq\" (UniqueName: \"kubernetes.io/projected/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1-kube-api-access-pwkfq\") pod \"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1\" (UID: \"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1\") " Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.513349 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1-kube-api-access-pwkfq" (OuterVolumeSpecName: "kube-api-access-pwkfq") pod "ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1" (UID: "ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1"). InnerVolumeSpecName "kube-api-access-pwkfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.557218 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.595710 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-utilities\") pod \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.595794 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5sx\" (UniqueName: \"kubernetes.io/projected/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-kube-api-access-cb5sx\") pod \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.595887 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-catalog-content\") pod \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\" (UID: \"bf85ee41-9f19-4bea-91c4-d58b9cb055b4\") " Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.596146 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwkfq\" (UniqueName: \"kubernetes.io/projected/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1-kube-api-access-pwkfq\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.600329 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-utilities" (OuterVolumeSpecName: "utilities") pod "bf85ee41-9f19-4bea-91c4-d58b9cb055b4" (UID: "bf85ee41-9f19-4bea-91c4-d58b9cb055b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.626165 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-kube-api-access-cb5sx" (OuterVolumeSpecName: "kube-api-access-cb5sx") pod "bf85ee41-9f19-4bea-91c4-d58b9cb055b4" (UID: "bf85ee41-9f19-4bea-91c4-d58b9cb055b4"). InnerVolumeSpecName "kube-api-access-cb5sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.652259 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf85ee41-9f19-4bea-91c4-d58b9cb055b4" (UID: "bf85ee41-9f19-4bea-91c4-d58b9cb055b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.697104 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.697411 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5sx\" (UniqueName: \"kubernetes.io/projected/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-kube-api-access-cb5sx\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:06 crc kubenswrapper[4624]: I0228 03:50:06.697480 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf85ee41-9f19-4bea-91c4-d58b9cb055b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.022799 4624 generic.go:334] "Generic (PLEG): container finished" podID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" 
containerID="f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd" exitCode=0 Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.023120 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v4dt" event={"ID":"bf85ee41-9f19-4bea-91c4-d58b9cb055b4","Type":"ContainerDied","Data":"f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd"} Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.023209 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v4dt" event={"ID":"bf85ee41-9f19-4bea-91c4-d58b9cb055b4","Type":"ContainerDied","Data":"145e3d15acd29764bd27f78a50af8a83767814f28a7d6427ef4c642965f71d28"} Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.023272 4624 scope.go:117] "RemoveContainer" containerID="f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.023447 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6v4dt" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.027597 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537510-46lp8" event={"ID":"ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1","Type":"ContainerDied","Data":"0920aff1babf8e749e66264f7deae5694ade982b3a71d9bd9e889b12d288e1b7"} Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.027679 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0920aff1babf8e749e66264f7deae5694ade982b3a71d9bd9e889b12d288e1b7" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.027806 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537510-46lp8" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.054306 4624 scope.go:117] "RemoveContainer" containerID="1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.066985 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6v4dt"] Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.075157 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6v4dt"] Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.087223 4624 scope.go:117] "RemoveContainer" containerID="33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.116310 4624 scope.go:117] "RemoveContainer" containerID="f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd" Feb 28 03:50:07 crc kubenswrapper[4624]: E0228 03:50:07.116688 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd\": container with ID starting with f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd not found: ID does not exist" containerID="f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.116751 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd"} err="failed to get container status \"f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd\": rpc error: code = NotFound desc = could not find container \"f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd\": container with ID starting with f7558f352019d9763143c85bb9a61856d8b74a838905c76b9b4c52fe5e4651dd not 
found: ID does not exist" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.116792 4624 scope.go:117] "RemoveContainer" containerID="1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016" Feb 28 03:50:07 crc kubenswrapper[4624]: E0228 03:50:07.117471 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016\": container with ID starting with 1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016 not found: ID does not exist" containerID="1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.117514 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016"} err="failed to get container status \"1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016\": rpc error: code = NotFound desc = could not find container \"1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016\": container with ID starting with 1b8cfb11f61648b80b17075388cdd7931746dc1284d532afdd8fee2e5db2b016 not found: ID does not exist" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.117571 4624 scope.go:117] "RemoveContainer" containerID="33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3" Feb 28 03:50:07 crc kubenswrapper[4624]: E0228 03:50:07.117935 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3\": container with ID starting with 33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3 not found: ID does not exist" containerID="33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.117985 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3"} err="failed to get container status \"33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3\": rpc error: code = NotFound desc = could not find container \"33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3\": container with ID starting with 33b4f3393ce7035b71bba3e12c330a6a97ec4f4f8bc6a6089418858b420ffbb3 not found: ID does not exist" Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.466722 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537504-tbjkt"] Feb 28 03:50:07 crc kubenswrapper[4624]: I0228 03:50:07.474905 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537504-tbjkt"] Feb 28 03:50:08 crc kubenswrapper[4624]: I0228 03:50:08.101288 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" path="/var/lib/kubelet/pods/bf85ee41-9f19-4bea-91c4-d58b9cb055b4/volumes" Feb 28 03:50:08 crc kubenswrapper[4624]: I0228 03:50:08.102543 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc500688-a991-467a-8f21-bb969392f09b" path="/var/lib/kubelet/pods/cc500688-a991-467a-8f21-bb969392f09b/volumes" Feb 28 03:50:12 crc kubenswrapper[4624]: I0228 03:50:12.876500 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:12 crc kubenswrapper[4624]: I0228 03:50:12.877167 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:12 crc kubenswrapper[4624]: I0228 03:50:12.906346 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:13 crc kubenswrapper[4624]: I0228 
03:50:13.125548 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6r9np" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.785491 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2"] Feb 28 03:50:15 crc kubenswrapper[4624]: E0228 03:50:15.786358 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="extract-content" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786374 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="extract-content" Feb 28 03:50:15 crc kubenswrapper[4624]: E0228 03:50:15.786392 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="registry-server" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786399 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="registry-server" Feb 28 03:50:15 crc kubenswrapper[4624]: E0228 03:50:15.786418 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877611c7-6ae9-41ac-aa32-28d0f42c0e14" containerName="registry-server" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786426 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="877611c7-6ae9-41ac-aa32-28d0f42c0e14" containerName="registry-server" Feb 28 03:50:15 crc kubenswrapper[4624]: E0228 03:50:15.786436 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1" containerName="oc" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786442 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1" containerName="oc" Feb 28 03:50:15 crc kubenswrapper[4624]: E0228 03:50:15.786454 4624 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="extract-utilities" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786460 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="extract-utilities" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786777 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="877611c7-6ae9-41ac-aa32-28d0f42c0e14" containerName="registry-server" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786793 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1" containerName="oc" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.786804 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf85ee41-9f19-4bea-91c4-d58b9cb055b4" containerName="registry-server" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.788059 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.791939 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nv8nx" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.807415 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2"] Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.837960 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvpjx\" (UniqueName: \"kubernetes.io/projected/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-kube-api-access-zvpjx\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.838134 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-bundle\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.838202 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-util\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 
03:50:15.939838 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvpjx\" (UniqueName: \"kubernetes.io/projected/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-kube-api-access-zvpjx\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.939948 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-bundle\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.940028 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-util\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.940712 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-bundle\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.940712 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-util\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:15 crc kubenswrapper[4624]: I0228 03:50:15.961487 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvpjx\" (UniqueName: \"kubernetes.io/projected/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-kube-api-access-zvpjx\") pod \"bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:16 crc kubenswrapper[4624]: I0228 03:50:16.111673 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:16 crc kubenswrapper[4624]: I0228 03:50:16.587747 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2"] Feb 28 03:50:17 crc kubenswrapper[4624]: I0228 03:50:17.121611 4624 generic.go:334] "Generic (PLEG): container finished" podID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerID="0c57c1f9f49e455315cab4e7637cab054d988741038a74bb8204907850ec3c7b" exitCode=0 Feb 28 03:50:17 crc kubenswrapper[4624]: I0228 03:50:17.121672 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" event={"ID":"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347","Type":"ContainerDied","Data":"0c57c1f9f49e455315cab4e7637cab054d988741038a74bb8204907850ec3c7b"} Feb 28 03:50:17 crc kubenswrapper[4624]: I0228 03:50:17.121986 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" event={"ID":"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347","Type":"ContainerStarted","Data":"7366c64413037ad492194b0ee17a0ef3d1694e3197789254b2284409cd9b4363"} Feb 28 03:50:18 crc kubenswrapper[4624]: I0228 03:50:18.131528 4624 generic.go:334] "Generic (PLEG): container finished" podID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerID="072b88045f7dfe67760d547d7808fd30320b48e5df39428b0d60100716104a06" exitCode=0 Feb 28 03:50:18 crc kubenswrapper[4624]: I0228 03:50:18.131608 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" event={"ID":"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347","Type":"ContainerDied","Data":"072b88045f7dfe67760d547d7808fd30320b48e5df39428b0d60100716104a06"} Feb 28 03:50:19 crc kubenswrapper[4624]: I0228 03:50:19.143443 4624 generic.go:334] "Generic (PLEG): container finished" podID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerID="51373030245fc7b5ce3a6faa31bc307183a0b13bb00769690b1b98a0064ccff3" exitCode=0 Feb 28 03:50:19 crc kubenswrapper[4624]: I0228 03:50:19.143950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" event={"ID":"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347","Type":"ContainerDied","Data":"51373030245fc7b5ce3a6faa31bc307183a0b13bb00769690b1b98a0064ccff3"} Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.411210 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.426344 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-util\") pod \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.426451 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-bundle\") pod \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.426578 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvpjx\" (UniqueName: \"kubernetes.io/projected/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-kube-api-access-zvpjx\") pod \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\" (UID: \"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347\") " Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.433007 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-bundle" (OuterVolumeSpecName: "bundle") pod "7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" (UID: "7ddaf0c6-c923-45ba-ad47-fcfd5a96e347"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.440319 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-kube-api-access-zvpjx" (OuterVolumeSpecName: "kube-api-access-zvpjx") pod "7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" (UID: "7ddaf0c6-c923-45ba-ad47-fcfd5a96e347"). InnerVolumeSpecName "kube-api-access-zvpjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.456335 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-util" (OuterVolumeSpecName: "util") pod "7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" (UID: "7ddaf0c6-c923-45ba-ad47-fcfd5a96e347"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.532188 4624 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-util\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.532236 4624 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:20 crc kubenswrapper[4624]: I0228 03:50:20.532247 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvpjx\" (UniqueName: \"kubernetes.io/projected/7ddaf0c6-c923-45ba-ad47-fcfd5a96e347-kube-api-access-zvpjx\") on node \"crc\" DevicePath \"\"" Feb 28 03:50:21 crc kubenswrapper[4624]: I0228 03:50:21.162829 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" event={"ID":"7ddaf0c6-c923-45ba-ad47-fcfd5a96e347","Type":"ContainerDied","Data":"7366c64413037ad492194b0ee17a0ef3d1694e3197789254b2284409cd9b4363"} Feb 28 03:50:21 crc kubenswrapper[4624]: I0228 03:50:21.162899 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7366c64413037ad492194b0ee17a0ef3d1694e3197789254b2284409cd9b4363" Feb 28 03:50:21 crc kubenswrapper[4624]: I0228 03:50:21.163009 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.291297 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78"] Feb 28 03:50:23 crc kubenswrapper[4624]: E0228 03:50:23.291904 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="extract" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.291919 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="extract" Feb 28 03:50:23 crc kubenswrapper[4624]: E0228 03:50:23.291936 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="pull" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.291943 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="pull" Feb 28 03:50:23 crc kubenswrapper[4624]: E0228 03:50:23.291951 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="util" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.291958 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="util" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.292155 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddaf0c6-c923-45ba-ad47-fcfd5a96e347" containerName="extract" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.292709 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.296436 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-tlvr2" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.320188 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78"] Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.383534 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bg9\" (UniqueName: \"kubernetes.io/projected/58d2ada8-fb04-4054-bba9-e2742bddbce5-kube-api-access-m5bg9\") pod \"openstack-operator-controller-init-596b9db54c-pdc78\" (UID: \"58d2ada8-fb04-4054-bba9-e2742bddbce5\") " pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.484670 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5bg9\" (UniqueName: \"kubernetes.io/projected/58d2ada8-fb04-4054-bba9-e2742bddbce5-kube-api-access-m5bg9\") pod \"openstack-operator-controller-init-596b9db54c-pdc78\" (UID: \"58d2ada8-fb04-4054-bba9-e2742bddbce5\") " pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.505628 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5bg9\" (UniqueName: \"kubernetes.io/projected/58d2ada8-fb04-4054-bba9-e2742bddbce5-kube-api-access-m5bg9\") pod \"openstack-operator-controller-init-596b9db54c-pdc78\" (UID: \"58d2ada8-fb04-4054-bba9-e2742bddbce5\") " pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.623456 4624 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:23 crc kubenswrapper[4624]: I0228 03:50:23.918778 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78"] Feb 28 03:50:24 crc kubenswrapper[4624]: I0228 03:50:24.199305 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" event={"ID":"58d2ada8-fb04-4054-bba9-e2742bddbce5","Type":"ContainerStarted","Data":"f91b195a4d0760cc47f12ba49fdf92d7569a8c591d86bd9f279cd7cd6be1d5b6"} Feb 28 03:50:29 crc kubenswrapper[4624]: I0228 03:50:29.242343 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" event={"ID":"58d2ada8-fb04-4054-bba9-e2742bddbce5","Type":"ContainerStarted","Data":"2eec0bbe8201d34924616bd558f50902b2a18482263473d1be7aaf516090a973"} Feb 28 03:50:29 crc kubenswrapper[4624]: I0228 03:50:29.243353 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:29 crc kubenswrapper[4624]: I0228 03:50:29.275708 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" podStartSLOduration=1.565456344 podStartE2EDuration="6.275689896s" podCreationTimestamp="2026-02-28 03:50:23 +0000 UTC" firstStartedPulling="2026-02-28 03:50:23.942414461 +0000 UTC m=+878.606453770" lastFinishedPulling="2026-02-28 03:50:28.652648013 +0000 UTC m=+883.316687322" observedRunningTime="2026-02-28 03:50:29.271272815 +0000 UTC m=+883.935312124" watchObservedRunningTime="2026-02-28 03:50:29.275689896 +0000 UTC m=+883.939729205" Feb 28 03:50:33 crc kubenswrapper[4624]: I0228 03:50:33.627781 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-596b9db54c-pdc78" Feb 28 03:50:47 crc kubenswrapper[4624]: I0228 03:50:47.002789 4624 scope.go:117] "RemoveContainer" containerID="bc3a3e29cb1a37d6d1bff38a6837a6b22b52144cd5908e9c33d3f9c67a322598" Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.934390 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4"] Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.935647 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.939136 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4s4kq" Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.955800 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg"] Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.956866 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.958267 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4"] Feb 28 03:50:53 crc kubenswrapper[4624]: I0228 03:50:53.963749 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-kfzgc" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.002166 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.010178 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.011731 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.016963 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fhwmr" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.046281 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.063440 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.072697 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.077737 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fwbck" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.087145 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-585b788787-b97cm"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.090563 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.099852 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k2h99" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.129298 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84cw\" (UniqueName: \"kubernetes.io/projected/f62e258f-732a-4da1-8670-475725509310-kube-api-access-m84cw\") pod \"designate-operator-controller-manager-55cc45767f-5rdgn\" (UID: \"f62e258f-732a-4da1-8670-475725509310\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.129353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d58tk\" (UniqueName: \"kubernetes.io/projected/4833820c-a44e-4eb4-8716-bab85def7811-kube-api-access-d58tk\") pod \"barbican-operator-controller-manager-999d845f-jrsj4\" (UID: \"4833820c-a44e-4eb4-8716-bab85def7811\") " pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.129384 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xht22\" (UniqueName: \"kubernetes.io/projected/50df7aca-97ff-41dc-92cc-143cb02acea8-kube-api-access-xht22\") pod \"cinder-operator-controller-manager-768c8b45bb-gkxmg\" (UID: \"50df7aca-97ff-41dc-92cc-143cb02acea8\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.139456 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.159251 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-585b788787-b97cm"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.174149 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.174944 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.177365 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4s8sl" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.185690 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.189529 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-c77466965-f8x9g"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.191826 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.208984 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.209158 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kcp5w" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.220134 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.221135 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.226504 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-k2bjh" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.230794 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m84cw\" (UniqueName: \"kubernetes.io/projected/f62e258f-732a-4da1-8670-475725509310-kube-api-access-m84cw\") pod \"designate-operator-controller-manager-55cc45767f-5rdgn\" (UID: \"f62e258f-732a-4da1-8670-475725509310\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.230867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d58tk\" (UniqueName: \"kubernetes.io/projected/4833820c-a44e-4eb4-8716-bab85def7811-kube-api-access-d58tk\") pod \"barbican-operator-controller-manager-999d845f-jrsj4\" (UID: \"4833820c-a44e-4eb4-8716-bab85def7811\") " 
pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.230903 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5rg\" (UniqueName: \"kubernetes.io/projected/a5b6c7a0-a640-4faa-836c-7c5d0c29acd9-kube-api-access-bd5rg\") pod \"glance-operator-controller-manager-7f748f8b74-vdhdb\" (UID: \"a5b6c7a0-a640-4faa-836c-7c5d0c29acd9\") " pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.230922 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xht22\" (UniqueName: \"kubernetes.io/projected/50df7aca-97ff-41dc-92cc-143cb02acea8-kube-api-access-xht22\") pod \"cinder-operator-controller-manager-768c8b45bb-gkxmg\" (UID: \"50df7aca-97ff-41dc-92cc-143cb02acea8\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.230993 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj459\" (UniqueName: \"kubernetes.io/projected/8507d808-bdf4-47f7-adb9-e3746c4768bf-kube-api-access-zj459\") pod \"heat-operator-controller-manager-585b788787-b97cm\" (UID: \"8507d808-bdf4-47f7-adb9-e3746c4768bf\") " pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.243909 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.250694 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-c77466965-f8x9g"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.281515 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.282628 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.287750 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d58tk\" (UniqueName: \"kubernetes.io/projected/4833820c-a44e-4eb4-8716-bab85def7811-kube-api-access-d58tk\") pod \"barbican-operator-controller-manager-999d845f-jrsj4\" (UID: \"4833820c-a44e-4eb4-8716-bab85def7811\") " pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.296197 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xht22\" (UniqueName: \"kubernetes.io/projected/50df7aca-97ff-41dc-92cc-143cb02acea8-kube-api-access-xht22\") pod \"cinder-operator-controller-manager-768c8b45bb-gkxmg\" (UID: \"50df7aca-97ff-41dc-92cc-143cb02acea8\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.305015 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m84cw\" (UniqueName: \"kubernetes.io/projected/f62e258f-732a-4da1-8670-475725509310-kube-api-access-m84cw\") pod \"designate-operator-controller-manager-55cc45767f-5rdgn\" (UID: \"f62e258f-732a-4da1-8670-475725509310\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.326223 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-knmpc"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.327039 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.333120 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.339199 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj459\" (UniqueName: \"kubernetes.io/projected/8507d808-bdf4-47f7-adb9-e3746c4768bf-kube-api-access-zj459\") pod \"heat-operator-controller-manager-585b788787-b97cm\" (UID: \"8507d808-bdf4-47f7-adb9-e3746c4768bf\") " pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344310 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vftp\" (UniqueName: \"kubernetes.io/projected/2ef92f5a-9f82-40fb-81a2-c4a75aec60cf-kube-api-access-6vftp\") pod \"ironic-operator-controller-manager-8784b4656-8vq2d\" (UID: \"2ef92f5a-9f82-40fb-81a2-c4a75aec60cf\") " pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344406 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjg2\" (UniqueName: \"kubernetes.io/projected/5013f14b-e7ba-400b-8a1e-d187991a0e49-kube-api-access-2tjg2\") pod \"horizon-operator-controller-manager-7db95d7ffb-k68gx\" (UID: \"5013f14b-e7ba-400b-8a1e-d187991a0e49\") " pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344511 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmth\" (UniqueName: 
\"kubernetes.io/projected/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-kube-api-access-rcmth\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344617 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2s6n\" (UniqueName: \"kubernetes.io/projected/3e549f4d-18e0-49cf-a82e-efde664ab810-kube-api-access-c2s6n\") pod \"manila-operator-controller-manager-76fd76856-knmpc\" (UID: \"3e549f4d-18e0-49cf-a82e-efde664ab810\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344702 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq25z\" (UniqueName: \"kubernetes.io/projected/84190c06-4523-4d3d-ab8c-cec0aca7c393-kube-api-access-xq25z\") pod \"keystone-operator-controller-manager-78b64779b9-rvz6s\" (UID: \"84190c06-4523-4d3d-ab8c-cec0aca7c393\") " pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5rg\" (UniqueName: \"kubernetes.io/projected/a5b6c7a0-a640-4faa-836c-7c5d0c29acd9-kube-api-access-bd5rg\") pod \"glance-operator-controller-manager-7f748f8b74-vdhdb\" (UID: \"a5b6c7a0-a640-4faa-836c-7c5d0c29acd9\") " pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.344885 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" 
(UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.346274 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.356912 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-knmpc"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.379072 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.380302 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.382036 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dqmln" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.382403 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6vd5z" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.382739 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7jwgs" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.434765 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5rg\" (UniqueName: \"kubernetes.io/projected/a5b6c7a0-a640-4faa-836c-7c5d0c29acd9-kube-api-access-bd5rg\") pod \"glance-operator-controller-manager-7f748f8b74-vdhdb\" (UID: \"a5b6c7a0-a640-4faa-836c-7c5d0c29acd9\") " 
pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.443160 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.447827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj459\" (UniqueName: \"kubernetes.io/projected/8507d808-bdf4-47f7-adb9-e3746c4768bf-kube-api-access-zj459\") pod \"heat-operator-controller-manager-585b788787-b97cm\" (UID: \"8507d808-bdf4-47f7-adb9-e3746c4768bf\") " pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.452881 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vftp\" (UniqueName: \"kubernetes.io/projected/2ef92f5a-9f82-40fb-81a2-c4a75aec60cf-kube-api-access-6vftp\") pod \"ironic-operator-controller-manager-8784b4656-8vq2d\" (UID: \"2ef92f5a-9f82-40fb-81a2-c4a75aec60cf\") " pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.452962 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjg2\" (UniqueName: \"kubernetes.io/projected/5013f14b-e7ba-400b-8a1e-d187991a0e49-kube-api-access-2tjg2\") pod \"horizon-operator-controller-manager-7db95d7ffb-k68gx\" (UID: \"5013f14b-e7ba-400b-8a1e-d187991a0e49\") " pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.453033 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52ng\" (UniqueName: \"kubernetes.io/projected/f3c08c1c-5646-48e9-9c9a-537b7619ecb0-kube-api-access-c52ng\") pod \"mariadb-operator-controller-manager-745fc45789-tvr7t\" (UID: 
\"f3c08c1c-5646-48e9-9c9a-537b7619ecb0\") " pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.453126 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmth\" (UniqueName: \"kubernetes.io/projected/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-kube-api-access-rcmth\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.453272 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2s6n\" (UniqueName: \"kubernetes.io/projected/3e549f4d-18e0-49cf-a82e-efde664ab810-kube-api-access-c2s6n\") pod \"manila-operator-controller-manager-76fd76856-knmpc\" (UID: \"3e549f4d-18e0-49cf-a82e-efde664ab810\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.453296 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq25z\" (UniqueName: \"kubernetes.io/projected/84190c06-4523-4d3d-ab8c-cec0aca7c393-kube-api-access-xq25z\") pod \"keystone-operator-controller-manager-78b64779b9-rvz6s\" (UID: \"84190c06-4523-4d3d-ab8c-cec0aca7c393\") " pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.453397 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:54 crc kubenswrapper[4624]: E0228 03:50:54.453591 4624 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:54 crc kubenswrapper[4624]: E0228 03:50:54.453666 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert podName:a96b8e7a-1320-4ede-9f43-ec80e2d562c9 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:54.953630659 +0000 UTC m=+909.617669968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert") pod "infra-operator-controller-manager-c77466965-f8x9g" (UID: "a96b8e7a-1320-4ede-9f43-ec80e2d562c9") : secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.492321 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.496305 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.513646 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nvlgp" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.517916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2s6n\" (UniqueName: \"kubernetes.io/projected/3e549f4d-18e0-49cf-a82e-efde664ab810-kube-api-access-c2s6n\") pod \"manila-operator-controller-manager-76fd76856-knmpc\" (UID: \"3e549f4d-18e0-49cf-a82e-efde664ab810\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.522424 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vftp\" (UniqueName: \"kubernetes.io/projected/2ef92f5a-9f82-40fb-81a2-c4a75aec60cf-kube-api-access-6vftp\") pod \"ironic-operator-controller-manager-8784b4656-8vq2d\" (UID: \"2ef92f5a-9f82-40fb-81a2-c4a75aec60cf\") " pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.523654 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjg2\" (UniqueName: \"kubernetes.io/projected/5013f14b-e7ba-400b-8a1e-d187991a0e49-kube-api-access-2tjg2\") pod \"horizon-operator-controller-manager-7db95d7ffb-k68gx\" (UID: \"5013f14b-e7ba-400b-8a1e-d187991a0e49\") " pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.533790 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmth\" (UniqueName: \"kubernetes.io/projected/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-kube-api-access-rcmth\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: 
\"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.537007 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq25z\" (UniqueName: \"kubernetes.io/projected/84190c06-4523-4d3d-ab8c-cec0aca7c393-kube-api-access-xq25z\") pod \"keystone-operator-controller-manager-78b64779b9-rvz6s\" (UID: \"84190c06-4523-4d3d-ab8c-cec0aca7c393\") " pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.552216 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.553904 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.554671 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfccl\" (UniqueName: \"kubernetes.io/projected/6497616d-eb08-4bd4-b3a0-8ee000cdfe47-kube-api-access-qfccl\") pod \"neutron-operator-controller-manager-768f998cf4-dv9vf\" (UID: \"6497616d-eb08-4bd4-b3a0-8ee000cdfe47\") " pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.554766 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52ng\" (UniqueName: \"kubernetes.io/projected/f3c08c1c-5646-48e9-9c9a-537b7619ecb0-kube-api-access-c52ng\") pod \"mariadb-operator-controller-manager-745fc45789-tvr7t\" (UID: \"f3c08c1c-5646-48e9-9c9a-537b7619ecb0\") " pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.555479 4624 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.555670 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.563145 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.567370 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.573304 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-whpcp" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.574847 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.594513 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.595856 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.599996 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.603446 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cdjs5" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.616394 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52ng\" (UniqueName: \"kubernetes.io/projected/f3c08c1c-5646-48e9-9c9a-537b7619ecb0-kube-api-access-c52ng\") pod \"mariadb-operator-controller-manager-745fc45789-tvr7t\" (UID: \"f3c08c1c-5646-48e9-9c9a-537b7619ecb0\") " pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.660957 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.661647 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfccl\" (UniqueName: \"kubernetes.io/projected/6497616d-eb08-4bd4-b3a0-8ee000cdfe47-kube-api-access-qfccl\") pod \"neutron-operator-controller-manager-768f998cf4-dv9vf\" (UID: \"6497616d-eb08-4bd4-b3a0-8ee000cdfe47\") " pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.686239 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.701965 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.702829 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.705735 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.731504 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.731698 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-kpv5g" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.757971 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.769851 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.776590 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfccl\" (UniqueName: \"kubernetes.io/projected/6497616d-eb08-4bd4-b3a0-8ee000cdfe47-kube-api-access-qfccl\") pod \"neutron-operator-controller-manager-768f998cf4-dv9vf\" (UID: \"6497616d-eb08-4bd4-b3a0-8ee000cdfe47\") " pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.792240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82nf\" (UniqueName: \"kubernetes.io/projected/797119fd-2208-40d7-86c8-594e59529182-kube-api-access-t82nf\") pod \"nova-operator-controller-manager-6c67ff7674-psffj\" (UID: \"797119fd-2208-40d7-86c8-594e59529182\") " pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.792375 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzxw\" (UniqueName: \"kubernetes.io/projected/8cb779fb-ff77-468c-9198-065b3e4bf393-kube-api-access-mxzxw\") pod \"octavia-operator-controller-manager-cc79fdffd-xw2s7\" (UID: \"8cb779fb-ff77-468c-9198-065b3e4bf393\") " pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.798124 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.809967 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.832657 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.886410 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.895886 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.895917 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.895975 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82nf\" (UniqueName: \"kubernetes.io/projected/797119fd-2208-40d7-86c8-594e59529182-kube-api-access-t82nf\") pod \"nova-operator-controller-manager-6c67ff7674-psffj\" (UID: \"797119fd-2208-40d7-86c8-594e59529182\") " pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.896026 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzxw\" (UniqueName: \"kubernetes.io/projected/8cb779fb-ff77-468c-9198-065b3e4bf393-kube-api-access-mxzxw\") pod \"octavia-operator-controller-manager-cc79fdffd-xw2s7\" (UID: \"8cb779fb-ff77-468c-9198-065b3e4bf393\") " pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.896067 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grpff\" (UniqueName: \"kubernetes.io/projected/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-kube-api-access-grpff\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.904905 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww"] Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.910877 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.912053 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4bstx" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.923654 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g92rk" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.971572 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzxw\" (UniqueName: \"kubernetes.io/projected/8cb779fb-ff77-468c-9198-065b3e4bf393-kube-api-access-mxzxw\") pod \"octavia-operator-controller-manager-cc79fdffd-xw2s7\" (UID: \"8cb779fb-ff77-468c-9198-065b3e4bf393\") " pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:50:54 crc kubenswrapper[4624]: I0228 03:50:54.983100 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82nf\" (UniqueName: \"kubernetes.io/projected/797119fd-2208-40d7-86c8-594e59529182-kube-api-access-t82nf\") pod \"nova-operator-controller-manager-6c67ff7674-psffj\" (UID: \"797119fd-2208-40d7-86c8-594e59529182\") " pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.004744 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.004880 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-99hsx\" (UniqueName: \"kubernetes.io/projected/582d0963-7f3a-4664-85e4-9148c495eb1a-kube-api-access-99hsx\") pod \"placement-operator-controller-manager-bff955cc4-x8vll\" (UID: \"582d0963-7f3a-4664-85e4-9148c495eb1a\") " pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.005023 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grpff\" (UniqueName: \"kubernetes.io/projected/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-kube-api-access-grpff\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.005053 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfdq\" (UniqueName: \"kubernetes.io/projected/c6a151b1-0add-4b07-aa32-9a9e0dc2f526-kube-api-access-wrfdq\") pod \"ovn-operator-controller-manager-684c7d77b-c6gww\" (UID: \"c6a151b1-0add-4b07-aa32-9a9e0dc2f526\") " pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.005147 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.005577 4624 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.005651 4624 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert podName:a96b8e7a-1320-4ede-9f43-ec80e2d562c9 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:56.005633537 +0000 UTC m=+910.669672846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert") pod "infra-operator-controller-manager-c77466965-f8x9g" (UID: "a96b8e7a-1320-4ede-9f43-ec80e2d562c9") : secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.007020 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.007135 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert podName:9776c87a-53fb-404c-8bbe-0fbeb07eda0d nodeName:}" failed. No retries permitted until 2026-02-28 03:50:55.507111537 +0000 UTC m=+910.171150846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" (UID: "9776c87a-53fb-404c-8bbe-0fbeb07eda0d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.016231 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.018248 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.032423 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jtrzj" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.100642 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grpff\" (UniqueName: \"kubernetes.io/projected/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-kube-api-access-grpff\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.109590 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.111130 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rv54\" (UniqueName: \"kubernetes.io/projected/b26b01f4-0d96-4a5b-bb71-58d691b92119-kube-api-access-5rv54\") pod \"swift-operator-controller-manager-55f4bf89cb-54l7x\" (UID: \"b26b01f4-0d96-4a5b-bb71-58d691b92119\") " pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.111258 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hsx\" (UniqueName: \"kubernetes.io/projected/582d0963-7f3a-4664-85e4-9148c495eb1a-kube-api-access-99hsx\") pod \"placement-operator-controller-manager-bff955cc4-x8vll\" (UID: \"582d0963-7f3a-4664-85e4-9148c495eb1a\") " pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.111322 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrfdq\" (UniqueName: \"kubernetes.io/projected/c6a151b1-0add-4b07-aa32-9a9e0dc2f526-kube-api-access-wrfdq\") pod \"ovn-operator-controller-manager-684c7d77b-c6gww\" (UID: \"c6a151b1-0add-4b07-aa32-9a9e0dc2f526\") " pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.122354 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.135943 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.156131 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.169946 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.175880 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hsx\" (UniqueName: \"kubernetes.io/projected/582d0963-7f3a-4664-85e4-9148c495eb1a-kube-api-access-99hsx\") pod \"placement-operator-controller-manager-bff955cc4-x8vll\" (UID: \"582d0963-7f3a-4664-85e4-9148c495eb1a\") " pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.178594 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nh88j" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.179480 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrfdq\" (UniqueName: \"kubernetes.io/projected/c6a151b1-0add-4b07-aa32-9a9e0dc2f526-kube-api-access-wrfdq\") pod \"ovn-operator-controller-manager-684c7d77b-c6gww\" (UID: \"c6a151b1-0add-4b07-aa32-9a9e0dc2f526\") " pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.193097 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.194738 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.195849 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.197119 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7l9tb" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.214197 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5cv\" (UniqueName: \"kubernetes.io/projected/39ee7326-c4c7-4dee-a749-35da4ff62746-kube-api-access-8c5cv\") pod \"test-operator-controller-manager-8467ccb4c8-lfkwt\" (UID: \"39ee7326-c4c7-4dee-a749-35da4ff62746\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.214266 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9vlk\" (UniqueName: \"kubernetes.io/projected/d6722506-d5dd-4fb4-b81a-d27c5dab59dd-kube-api-access-c9vlk\") pod \"telemetry-operator-controller-manager-56dc67d744-tgr2z\" (UID: \"d6722506-d5dd-4fb4-b81a-d27c5dab59dd\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.214310 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rv54\" (UniqueName: \"kubernetes.io/projected/b26b01f4-0d96-4a5b-bb71-58d691b92119-kube-api-access-5rv54\") pod \"swift-operator-controller-manager-55f4bf89cb-54l7x\" (UID: \"b26b01f4-0d96-4a5b-bb71-58d691b92119\") " pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.263550 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.286557 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.305057 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.326101 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9vlk\" (UniqueName: \"kubernetes.io/projected/d6722506-d5dd-4fb4-b81a-d27c5dab59dd-kube-api-access-c9vlk\") pod \"telemetry-operator-controller-manager-56dc67d744-tgr2z\" (UID: \"d6722506-d5dd-4fb4-b81a-d27c5dab59dd\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.326679 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c5cv\" (UniqueName: \"kubernetes.io/projected/39ee7326-c4c7-4dee-a749-35da4ff62746-kube-api-access-8c5cv\") pod \"test-operator-controller-manager-8467ccb4c8-lfkwt\" (UID: \"39ee7326-c4c7-4dee-a749-35da4ff62746\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.342611 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.343267 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rv54\" (UniqueName: \"kubernetes.io/projected/b26b01f4-0d96-4a5b-bb71-58d691b92119-kube-api-access-5rv54\") pod \"swift-operator-controller-manager-55f4bf89cb-54l7x\" (UID: 
\"b26b01f4-0d96-4a5b-bb71-58d691b92119\") " pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.343752 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.348131 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8qskv" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.367419 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.377819 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c5cv\" (UniqueName: \"kubernetes.io/projected/39ee7326-c4c7-4dee-a749-35da4ff62746-kube-api-access-8c5cv\") pod \"test-operator-controller-manager-8467ccb4c8-lfkwt\" (UID: \"39ee7326-c4c7-4dee-a749-35da4ff62746\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.388264 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.412508 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.419766 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9vlk\" (UniqueName: \"kubernetes.io/projected/d6722506-d5dd-4fb4-b81a-d27c5dab59dd-kube-api-access-c9vlk\") pod \"telemetry-operator-controller-manager-56dc67d744-tgr2z\" (UID: \"d6722506-d5dd-4fb4-b81a-d27c5dab59dd\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.436448 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.437599 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2hs2\" (UniqueName: \"kubernetes.io/projected/195b486b-92db-481a-9478-7a3edfeb79ae-kube-api-access-c2hs2\") pod \"watcher-operator-controller-manager-65c9f4f6b-7w84p\" (UID: \"195b486b-92db-481a-9478-7a3edfeb79ae\") " pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.465395 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.540526 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.540641 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2hs2\" (UniqueName: \"kubernetes.io/projected/195b486b-92db-481a-9478-7a3edfeb79ae-kube-api-access-c2hs2\") pod \"watcher-operator-controller-manager-65c9f4f6b-7w84p\" (UID: \"195b486b-92db-481a-9478-7a3edfeb79ae\") " pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.541211 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.541267 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert podName:9776c87a-53fb-404c-8bbe-0fbeb07eda0d nodeName:}" failed. No retries permitted until 2026-02-28 03:50:56.541246852 +0000 UTC m=+911.205286161 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" (UID: "9776c87a-53fb-404c-8bbe-0fbeb07eda0d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.573299 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.573859 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2hs2\" (UniqueName: \"kubernetes.io/projected/195b486b-92db-481a-9478-7a3edfeb79ae-kube-api-access-c2hs2\") pod \"watcher-operator-controller-manager-65c9f4f6b-7w84p\" (UID: \"195b486b-92db-481a-9478-7a3edfeb79ae\") " pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.630831 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.633868 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.641225 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-98bg9" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.641436 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.641546 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.676986 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.709948 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.710887 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.710976 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.725220 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-c44gr" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.732657 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn"] Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.748455 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.748505 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2lkw\" (UniqueName: \"kubernetes.io/projected/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-kube-api-access-h2lkw\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.748550 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccl7\" (UniqueName: \"kubernetes.io/projected/154dfd82-a449-4812-bdd5-3e9c8a474b3d-kube-api-access-mccl7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7d47r\" (UID: \"154dfd82-a449-4812-bdd5-3e9c8a474b3d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 
03:50:55.748621 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.826162 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.851186 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.851282 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.851342 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2lkw\" (UniqueName: \"kubernetes.io/projected/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-kube-api-access-h2lkw\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 
03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.851392 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mccl7\" (UniqueName: \"kubernetes.io/projected/154dfd82-a449-4812-bdd5-3e9c8a474b3d-kube-api-access-mccl7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7d47r\" (UID: \"154dfd82-a449-4812-bdd5-3e9c8a474b3d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.851696 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.851815 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:56.351781717 +0000 UTC m=+911.015821026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "webhook-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.852020 4624 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: E0228 03:50:55.852046 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:56.352036634 +0000 UTC m=+911.016075933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "metrics-server-cert" not found Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.891203 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mccl7\" (UniqueName: \"kubernetes.io/projected/154dfd82-a449-4812-bdd5-3e9c8a474b3d-kube-api-access-mccl7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7d47r\" (UID: \"154dfd82-a449-4812-bdd5-3e9c8a474b3d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" Feb 28 03:50:55 crc kubenswrapper[4624]: I0228 03:50:55.905002 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2lkw\" (UniqueName: \"kubernetes.io/projected/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-kube-api-access-h2lkw\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.046193 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4"] Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.054499 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.054755 4624 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.054826 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert podName:a96b8e7a-1320-4ede-9f43-ec80e2d562c9 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:58.054802492 +0000 UTC m=+912.718841801 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert") pod "infra-operator-controller-manager-c77466965-f8x9g" (UID: "a96b8e7a-1320-4ede-9f43-ec80e2d562c9") : secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.105750 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.372017 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.372362 4624 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.372416 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 
03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.372440 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:57.372421479 +0000 UTC m=+912.036460788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "metrics-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.372591 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.372665 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:57.372649666 +0000 UTC m=+912.036688975 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "webhook-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.466465 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg"] Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.567794 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d"] Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.599273 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.601912 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: E0228 03:50:56.602571 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert podName:9776c87a-53fb-404c-8bbe-0fbeb07eda0d nodeName:}" failed. No retries permitted until 2026-02-28 03:50:58.602553581 +0000 UTC m=+913.266592880 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" (UID: "9776c87a-53fb-404c-8bbe-0fbeb07eda0d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.612258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" event={"ID":"50df7aca-97ff-41dc-92cc-143cb02acea8","Type":"ContainerStarted","Data":"65beae4010d026235669186d327b3dec5e77540b53072c8a32b3511ecc589644"} Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.613587 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" event={"ID":"f62e258f-732a-4da1-8670-475725509310","Type":"ContainerStarted","Data":"f98bfea38d42387b839e43a1006fea135f641df18714a539164c6e5268606aa2"} Feb 28 03:50:56 crc kubenswrapper[4624]: I0228 03:50:56.614445 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" event={"ID":"4833820c-a44e-4eb4-8716-bab85def7811","Type":"ContainerStarted","Data":"57b626f92bd37a1180374b3086f74f2815ca3277bf8d74f03a578d0dafc7e01a"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.064444 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.102326 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.134938 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 
03:50:57.144899 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.150144 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.203218 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb"] Feb 28 03:50:57 crc kubenswrapper[4624]: W0228 03:50:57.205437 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6497616d_eb08_4bd4_b3a0_8ee000cdfe47.slice/crio-5ff64af81208e4d3260625ee2604d896828a48b3cea4c91366d45a52c8aefbb5 WatchSource:0}: Error finding container 5ff64af81208e4d3260625ee2604d896828a48b3cea4c91366d45a52c8aefbb5: Status 404 returned error can't find the container with id 5ff64af81208e4d3260625ee2604d896828a48b3cea4c91366d45a52c8aefbb5 Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.210290 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.216950 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-knmpc"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.222254 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.237292 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-585b788787-b97cm"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.252037 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.260856 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt"] Feb 28 03:50:57 crc kubenswrapper[4624]: W0228 03:50:57.269949 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c08c1c_5646_48e9_9c9a_537b7619ecb0.slice/crio-50b3fa701df647be8f148c611a29e5121f79fcbb078b72552e04ac23e409f137 WatchSource:0}: Error finding container 50b3fa701df647be8f148c611a29e5121f79fcbb078b72552e04ac23e409f137: Status 404 returned error can't find the container with id 50b3fa701df647be8f148c611a29e5121f79fcbb078b72552e04ac23e409f137 Feb 28 03:50:57 crc kubenswrapper[4624]: W0228 03:50:57.275959 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod797119fd_2208_40d7_86c8_594e59529182.slice/crio-837c6bde2982aa2da04e659936c2f7560f3c63c8fd9a77f467007dcd99364aac WatchSource:0}: Error finding container 837c6bde2982aa2da04e659936c2f7560f3c63c8fd9a77f467007dcd99364aac: Status 404 returned error can't find the container with id 837c6bde2982aa2da04e659936c2f7560f3c63c8fd9a77f467007dcd99364aac Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.278923 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.298779 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.341296 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z"] Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 
03:50:57.344639 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x"] Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.407649 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:14c0fc05afebbccb71f9ac9a6913125154a886b697f21002c77d7d1151e26b8e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c52ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-745fc45789-tvr7t_openstack-operators(f3c08c1c-5646-48e9-9c9a-537b7619ecb0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.408030 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6f2cb7c21f4c284ce007f6a00ed4ac1e073036e50efae6285c3ee8d3fe1ae5e3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t82nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6c67ff7674-psffj_openstack-operators(797119fd-2208-40d7-86c8-594e59529182): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.409748 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" podUID="797119fd-2208-40d7-86c8-594e59529182" Feb 28 03:50:57 crc 
kubenswrapper[4624]: E0228 03:50:57.409791 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" podUID="f3c08c1c-5646-48e9-9c9a-537b7619ecb0" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.426356 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:66ad64ef3a56951e87fc73c893d08fa3807524876a8c461fb5060a29240bc71d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rv54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-55f4bf89cb-54l7x_openstack-operators(b26b01f4-0d96-4a5b-bb71-58d691b92119): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.428698 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" podUID="b26b01f4-0d96-4a5b-bb71-58d691b92119" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.429374 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mccl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7d47r_openstack-operators(154dfd82-a449-4812-bdd5-3e9c8a474b3d): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.431520 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" podUID="154dfd82-a449-4812-bdd5-3e9c8a474b3d" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.446643 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.446847 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.446973 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.447121 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:59.447065662 +0000 UTC m=+914.111104971 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "webhook-server-cert" not found Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.447113 4624 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.447213 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:50:59.447192835 +0000 UTC m=+914.111232144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "metrics-server-cert" not found Feb 28 03:50:57 crc kubenswrapper[4624]: W0228 03:50:57.453981 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6722506_d5dd_4fb4_b81a_d27c5dab59dd.slice/crio-600da623f7352d24750e72e8ab7f724ae1d4d4cd3fb5300610ca13c44a7de790 WatchSource:0}: Error finding container 600da623f7352d24750e72e8ab7f724ae1d4d4cd3fb5300610ca13c44a7de790: Status 404 returned error can't find the container with id 600da623f7352d24750e72e8ab7f724ae1d4d4cd3fb5300610ca13c44a7de790 Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.486438 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9vlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-tgr2z_openstack-operators(d6722506-d5dd-4fb4-b81a-d27c5dab59dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.489206 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" podUID="d6722506-d5dd-4fb4-b81a-d27c5dab59dd" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.639431 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" event={"ID":"2ef92f5a-9f82-40fb-81a2-c4a75aec60cf","Type":"ContainerStarted","Data":"558bfc02e4e329746ae24c808c7a84b2ce00167fb9ed06e78755ba42d7161d64"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.640742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" 
event={"ID":"8507d808-bdf4-47f7-adb9-e3746c4768bf","Type":"ContainerStarted","Data":"0121080d8d97a524ae99ea7cc44caa1dc0461bc0c9b2797e030972b70bf26051"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.653697 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" event={"ID":"3e549f4d-18e0-49cf-a82e-efde664ab810","Type":"ContainerStarted","Data":"7520ed7f32699e16a8297d06513d9a0daf2c21421ec2bf59cf7c07740732688f"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.654802 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" event={"ID":"a5b6c7a0-a640-4faa-836c-7c5d0c29acd9","Type":"ContainerStarted","Data":"3583732c5b5d9686da33f198ef9f6013d8df0c074282bcf7a5ad3929dcd5e651"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.658809 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" event={"ID":"6497616d-eb08-4bd4-b3a0-8ee000cdfe47","Type":"ContainerStarted","Data":"5ff64af81208e4d3260625ee2604d896828a48b3cea4c91366d45a52c8aefbb5"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.660224 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" event={"ID":"f3c08c1c-5646-48e9-9c9a-537b7619ecb0","Type":"ContainerStarted","Data":"50b3fa701df647be8f148c611a29e5121f79fcbb078b72552e04ac23e409f137"} Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.665377 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:14c0fc05afebbccb71f9ac9a6913125154a886b697f21002c77d7d1151e26b8e\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" 
podUID="f3c08c1c-5646-48e9-9c9a-537b7619ecb0" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.673216 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" event={"ID":"b26b01f4-0d96-4a5b-bb71-58d691b92119","Type":"ContainerStarted","Data":"2d277fffb73a2f6ffb18c807ed108d24b15327ad1262286826179a5e8a454697"} Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.683686 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:66ad64ef3a56951e87fc73c893d08fa3807524876a8c461fb5060a29240bc71d\\\"\"" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" podUID="b26b01f4-0d96-4a5b-bb71-58d691b92119" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.695947 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" event={"ID":"154dfd82-a449-4812-bdd5-3e9c8a474b3d","Type":"ContainerStarted","Data":"5f366ff308dac854f8655c9107cc86affa7dae3750826ceab43ef464d4943cdf"} Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.703045 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" podUID="154dfd82-a449-4812-bdd5-3e9c8a474b3d" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.714734 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" 
event={"ID":"8cb779fb-ff77-468c-9198-065b3e4bf393","Type":"ContainerStarted","Data":"9604c9ef69e493aee721c8a1b4c46462988661e926ed60713d229018070dd414"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.725659 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" event={"ID":"39ee7326-c4c7-4dee-a749-35da4ff62746","Type":"ContainerStarted","Data":"9b06035431d9d95ba2f1c44c6cce5d25b5a9c41d3e96844372850213b57ded61"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.729381 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" event={"ID":"797119fd-2208-40d7-86c8-594e59529182","Type":"ContainerStarted","Data":"837c6bde2982aa2da04e659936c2f7560f3c63c8fd9a77f467007dcd99364aac"} Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.733634 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6f2cb7c21f4c284ce007f6a00ed4ac1e073036e50efae6285c3ee8d3fe1ae5e3\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" podUID="797119fd-2208-40d7-86c8-594e59529182" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.737022 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" event={"ID":"c6a151b1-0add-4b07-aa32-9a9e0dc2f526","Type":"ContainerStarted","Data":"b269ddb9f0df74e1d352a17a342791e386cf879b79525998f6d7cd0b578564c7"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.744110 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" 
event={"ID":"84190c06-4523-4d3d-ab8c-cec0aca7c393","Type":"ContainerStarted","Data":"87a4f4591d4817ae730bc182a932c30effe765123f5d040391060fe3843aab27"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.750617 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" event={"ID":"195b486b-92db-481a-9478-7a3edfeb79ae","Type":"ContainerStarted","Data":"57325c06406a9d1d04922067fed07ddefc853e04dde44dde568c79c24dd0f6fa"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.758423 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" event={"ID":"582d0963-7f3a-4664-85e4-9148c495eb1a","Type":"ContainerStarted","Data":"fce963ca23dfc7e3e7c25bd0b0695ecd8a28f128051e7b8f2a020e8e1bf8929f"} Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.767862 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" event={"ID":"d6722506-d5dd-4fb4-b81a-d27c5dab59dd","Type":"ContainerStarted","Data":"600da623f7352d24750e72e8ab7f724ae1d4d4cd3fb5300610ca13c44a7de790"} Feb 28 03:50:57 crc kubenswrapper[4624]: E0228 03:50:57.779602 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" podUID="d6722506-d5dd-4fb4-b81a-d27c5dab59dd" Feb 28 03:50:57 crc kubenswrapper[4624]: I0228 03:50:57.783549 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" 
event={"ID":"5013f14b-e7ba-400b-8a1e-d187991a0e49","Type":"ContainerStarted","Data":"aefd46ab5283efdbd29956a9bda31b201a32e7a68820d1e970c8bc49cb116267"} Feb 28 03:50:58 crc kubenswrapper[4624]: I0228 03:50:58.065419 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.065728 4624 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.065826 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert podName:a96b8e7a-1320-4ede-9f43-ec80e2d562c9 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:02.065802069 +0000 UTC m=+916.729841428 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert") pod "infra-operator-controller-manager-c77466965-f8x9g" (UID: "a96b8e7a-1320-4ede-9f43-ec80e2d562c9") : secret "infra-operator-webhook-server-cert" not found Feb 28 03:50:58 crc kubenswrapper[4624]: I0228 03:50:58.679663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.679873 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.679937 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert podName:9776c87a-53fb-404c-8bbe-0fbeb07eda0d nodeName:}" failed. No retries permitted until 2026-02-28 03:51:02.679915421 +0000 UTC m=+917.343954730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" (UID: "9776c87a-53fb-404c-8bbe-0fbeb07eda0d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.799583 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" podUID="d6722506-d5dd-4fb4-b81a-d27c5dab59dd" Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.801113 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6f2cb7c21f4c284ce007f6a00ed4ac1e073036e50efae6285c3ee8d3fe1ae5e3\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" podUID="797119fd-2208-40d7-86c8-594e59529182" Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.801616 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:14c0fc05afebbccb71f9ac9a6913125154a886b697f21002c77d7d1151e26b8e\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" podUID="f3c08c1c-5646-48e9-9c9a-537b7619ecb0" Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.801715 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" podUID="154dfd82-a449-4812-bdd5-3e9c8a474b3d" Feb 28 03:50:58 crc kubenswrapper[4624]: E0228 03:50:58.802595 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:66ad64ef3a56951e87fc73c893d08fa3807524876a8c461fb5060a29240bc71d\\\"\"" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" podUID="b26b01f4-0d96-4a5b-bb71-58d691b92119" Feb 28 03:50:59 crc kubenswrapper[4624]: I0228 03:50:59.494705 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:59 crc kubenswrapper[4624]: I0228 03:50:59.494804 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:50:59 crc kubenswrapper[4624]: E0228 03:50:59.494952 4624 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 03:50:59 crc kubenswrapper[4624]: E0228 03:50:59.495010 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 
03:50:59 crc kubenswrapper[4624]: E0228 03:50:59.495042 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:03.495021753 +0000 UTC m=+918.159061062 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "metrics-server-cert" not found Feb 28 03:50:59 crc kubenswrapper[4624]: E0228 03:50:59.495099 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:03.495063474 +0000 UTC m=+918.159102783 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "webhook-server-cert" not found Feb 28 03:51:02 crc kubenswrapper[4624]: I0228 03:51:02.142691 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:51:02 crc kubenswrapper[4624]: E0228 03:51:02.142964 4624 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 03:51:02 crc kubenswrapper[4624]: E0228 03:51:02.143267 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert podName:a96b8e7a-1320-4ede-9f43-ec80e2d562c9 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:10.14324602 +0000 UTC m=+924.807285329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert") pod "infra-operator-controller-manager-c77466965-f8x9g" (UID: "a96b8e7a-1320-4ede-9f43-ec80e2d562c9") : secret "infra-operator-webhook-server-cert" not found Feb 28 03:51:02 crc kubenswrapper[4624]: I0228 03:51:02.752534 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:02 crc kubenswrapper[4624]: E0228 03:51:02.752926 4624 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:51:02 crc kubenswrapper[4624]: E0228 03:51:02.753071 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert podName:9776c87a-53fb-404c-8bbe-0fbeb07eda0d nodeName:}" failed. No retries permitted until 2026-02-28 03:51:10.753040245 +0000 UTC m=+925.417079554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" (UID: "9776c87a-53fb-404c-8bbe-0fbeb07eda0d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:51:03 crc kubenswrapper[4624]: I0228 03:51:03.570967 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:03 crc kubenswrapper[4624]: I0228 03:51:03.571053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:03 crc kubenswrapper[4624]: E0228 03:51:03.571214 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 03:51:03 crc kubenswrapper[4624]: E0228 03:51:03.571216 4624 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 03:51:03 crc kubenswrapper[4624]: E0228 03:51:03.571267 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:11.57125327 +0000 UTC m=+926.235292579 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "webhook-server-cert" not found Feb 28 03:51:03 crc kubenswrapper[4624]: E0228 03:51:03.571294 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:11.571275831 +0000 UTC m=+926.235315180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "metrics-server-cert" not found Feb 28 03:51:10 crc kubenswrapper[4624]: I0228 03:51:10.216793 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:51:10 crc kubenswrapper[4624]: I0228 03:51:10.226248 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a96b8e7a-1320-4ede-9f43-ec80e2d562c9-cert\") pod \"infra-operator-controller-manager-c77466965-f8x9g\" (UID: \"a96b8e7a-1320-4ede-9f43-ec80e2d562c9\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:51:10 crc kubenswrapper[4624]: I0228 03:51:10.450307 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:51:10 crc kubenswrapper[4624]: E0228 03:51:10.500215 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27" Feb 28 03:51:10 crc kubenswrapper[4624]: E0228 03:51:10.500569 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8c5cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-lfkwt_openstack-operators(39ee7326-c4c7-4dee-a749-35da4ff62746): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:51:10 crc kubenswrapper[4624]: E0228 03:51:10.501846 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" podUID="39ee7326-c4c7-4dee-a749-35da4ff62746" Feb 28 03:51:10 crc kubenswrapper[4624]: I0228 03:51:10.831947 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:10 crc kubenswrapper[4624]: E0228 03:51:10.832134 4624 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:51:10 crc kubenswrapper[4624]: E0228 03:51:10.832569 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert podName:9776c87a-53fb-404c-8bbe-0fbeb07eda0d nodeName:}" failed. No retries permitted until 2026-02-28 03:51:26.832543288 +0000 UTC m=+941.496582587 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" (UID: "9776c87a-53fb-404c-8bbe-0fbeb07eda0d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 03:51:10 crc kubenswrapper[4624]: E0228 03:51:10.929513 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" podUID="39ee7326-c4c7-4dee-a749-35da4ff62746" Feb 28 03:51:11 crc kubenswrapper[4624]: I0228 03:51:11.648515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:11 crc kubenswrapper[4624]: I0228 03:51:11.648593 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod 
\"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:11 crc kubenswrapper[4624]: E0228 03:51:11.648735 4624 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 03:51:11 crc kubenswrapper[4624]: E0228 03:51:11.648751 4624 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 03:51:11 crc kubenswrapper[4624]: E0228 03:51:11.648792 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:27.64877714 +0000 UTC m=+942.312816449 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "webhook-server-cert" not found Feb 28 03:51:11 crc kubenswrapper[4624]: E0228 03:51:11.648847 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs podName:61521bf4-1381-4fe8-a9d3-0948ebaa1ca6 nodeName:}" failed. No retries permitted until 2026-02-28 03:51:27.648823651 +0000 UTC m=+942.312863050 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs") pod "openstack-operator-controller-manager-6c5dbcf94c-psgpc" (UID: "61521bf4-1381-4fe8-a9d3-0948ebaa1ca6") : secret "metrics-server-cert" not found Feb 28 03:51:12 crc kubenswrapper[4624]: E0228 03:51:12.653107 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9a940ee50452c206923805ba7bf69dded7fcf53cb7ec14e22e793bd56501e242" Feb 28 03:51:12 crc kubenswrapper[4624]: E0228 03:51:12.653522 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9a940ee50452c206923805ba7bf69dded7fcf53cb7ec14e22e793bd56501e242,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2hs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-65c9f4f6b-7w84p_openstack-operators(195b486b-92db-481a-9478-7a3edfeb79ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:51:12 crc kubenswrapper[4624]: E0228 03:51:12.654841 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" podUID="195b486b-92db-481a-9478-7a3edfeb79ae" Feb 28 03:51:12 crc kubenswrapper[4624]: E0228 03:51:12.943562 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9a940ee50452c206923805ba7bf69dded7fcf53cb7ec14e22e793bd56501e242\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" podUID="195b486b-92db-481a-9478-7a3edfeb79ae" Feb 28 03:51:14 crc kubenswrapper[4624]: E0228 03:51:14.166351 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:e680ca7e34c878e662ef083db67cd3e3650bfacd859ec56bee95c5a39cc424a2" Feb 28 03:51:14 crc kubenswrapper[4624]: E0228 03:51:14.168266 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:e680ca7e34c878e662ef083db67cd3e3650bfacd859ec56bee95c5a39cc424a2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2tjg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-7db95d7ffb-k68gx_openstack-operators(5013f14b-e7ba-400b-8a1e-d187991a0e49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:51:14 crc kubenswrapper[4624]: E0228 03:51:14.169721 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" podUID="5013f14b-e7ba-400b-8a1e-d187991a0e49" Feb 28 03:51:14 crc kubenswrapper[4624]: E0228 03:51:14.956471 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:e680ca7e34c878e662ef083db67cd3e3650bfacd859ec56bee95c5a39cc424a2\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" podUID="5013f14b-e7ba-400b-8a1e-d187991a0e49" Feb 28 03:51:17 crc kubenswrapper[4624]: E0228 03:51:17.347658 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:b235599fe44c901b7ac0b51dfbcc9e0cea2bf5a9dc8295bafe16bba528d72997" Feb 28 03:51:17 crc kubenswrapper[4624]: E0228 03:51:17.349311 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:b235599fe44c901b7ac0b51dfbcc9e0cea2bf5a9dc8295bafe16bba528d72997,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq25z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-78b64779b9-rvz6s_openstack-operators(84190c06-4523-4d3d-ab8c-cec0aca7c393): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:51:17 crc kubenswrapper[4624]: E0228 03:51:17.350724 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" podUID="84190c06-4523-4d3d-ab8c-cec0aca7c393" Feb 28 03:51:17 crc kubenswrapper[4624]: E0228 03:51:17.988183 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:b235599fe44c901b7ac0b51dfbcc9e0cea2bf5a9dc8295bafe16bba528d72997\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" podUID="84190c06-4523-4d3d-ab8c-cec0aca7c393" Feb 28 03:51:20 crc kubenswrapper[4624]: I0228 03:51:20.230249 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:51:20 crc kubenswrapper[4624]: I0228 03:51:20.230298 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:51:20 crc kubenswrapper[4624]: E0228 03:51:20.315283 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:d2b850bc2ec026f8a179d5f59ad65b79f2d329e91a4ec8f140a645ebc38069b6" Feb 28 03:51:20 crc kubenswrapper[4624]: E0228 03:51:20.318716 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:d2b850bc2ec026f8a179d5f59ad65b79f2d329e91a4ec8f140a645ebc38069b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qfccl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-768f998cf4-dv9vf_openstack-operators(6497616d-eb08-4bd4-b3a0-8ee000cdfe47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:51:20 crc kubenswrapper[4624]: E0228 03:51:20.319996 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" podUID="6497616d-eb08-4bd4-b3a0-8ee000cdfe47" Feb 28 03:51:21 crc kubenswrapper[4624]: E0228 03:51:21.394986 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:d2b850bc2ec026f8a179d5f59ad65b79f2d329e91a4ec8f140a645ebc38069b6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" podUID="6497616d-eb08-4bd4-b3a0-8ee000cdfe47" Feb 28 03:51:21 crc kubenswrapper[4624]: E0228 03:51:21.640121 4624 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:b61730aa07404c6893c94c73cb7c80f16eb4d92a759740393430aca41f416b28" Feb 28 03:51:21 crc kubenswrapper[4624]: E0228 03:51:21.640374 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:b61730aa07404c6893c94c73cb7c80f16eb4d92a759740393430aca41f416b28,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99hsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-bff955cc4-x8vll_openstack-operators(582d0963-7f3a-4664-85e4-9148c495eb1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:51:21 crc kubenswrapper[4624]: E0228 03:51:21.641582 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" podUID="582d0963-7f3a-4664-85e4-9148c495eb1a" Feb 28 03:51:22 crc kubenswrapper[4624]: E0228 03:51:22.405894 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:b61730aa07404c6893c94c73cb7c80f16eb4d92a759740393430aca41f416b28\\\"\"" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" podUID="582d0963-7f3a-4664-85e4-9148c495eb1a" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.159724 4624 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-qflrx"] Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.162280 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.177728 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflrx"] Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.226557 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-catalog-content\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.226639 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-utilities\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.226814 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgsr\" (UniqueName: \"kubernetes.io/projected/c5946fb8-c868-4096-a60d-90fb78e05f88-kube-api-access-bmgsr\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.328686 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmgsr\" (UniqueName: \"kubernetes.io/projected/c5946fb8-c868-4096-a60d-90fb78e05f88-kube-api-access-bmgsr\") pod \"redhat-marketplace-qflrx\" (UID: 
\"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.328789 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-catalog-content\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.328837 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-utilities\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.329457 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-catalog-content\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.329546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-utilities\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.376961 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmgsr\" (UniqueName: \"kubernetes.io/projected/c5946fb8-c868-4096-a60d-90fb78e05f88-kube-api-access-bmgsr\") pod \"redhat-marketplace-qflrx\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " 
pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:25 crc kubenswrapper[4624]: I0228 03:51:25.490264 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:26 crc kubenswrapper[4624]: I0228 03:51:26.854364 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:26 crc kubenswrapper[4624]: I0228 03:51:26.871421 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9776c87a-53fb-404c-8bbe-0fbeb07eda0d-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67\" (UID: \"9776c87a-53fb-404c-8bbe-0fbeb07eda0d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:26 crc kubenswrapper[4624]: I0228 03:51:26.876215 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:27 crc kubenswrapper[4624]: I0228 03:51:27.451277 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-c77466965-f8x9g"] Feb 28 03:51:27 crc kubenswrapper[4624]: I0228 03:51:27.670695 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:27 crc kubenswrapper[4624]: I0228 03:51:27.670814 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:27 crc kubenswrapper[4624]: I0228 03:51:27.674893 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-webhook-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:27 crc kubenswrapper[4624]: I0228 03:51:27.684475 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61521bf4-1381-4fe8-a9d3-0948ebaa1ca6-metrics-certs\") pod \"openstack-operator-controller-manager-6c5dbcf94c-psgpc\" (UID: \"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6\") " 
pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:27 crc kubenswrapper[4624]: I0228 03:51:27.901041 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.546906 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" event={"ID":"4833820c-a44e-4eb4-8716-bab85def7811","Type":"ContainerStarted","Data":"c00a2bc0f4a28eb0cbe245f3f96d5e8e8beabe3ce7882e5a5095be2319b78075"} Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.547924 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.558449 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" event={"ID":"a96b8e7a-1320-4ede-9f43-ec80e2d562c9","Type":"ContainerStarted","Data":"e275fe2d5c8b612128b3a2b15868e798afb27e4157757b078651bdb1a0574e45"} Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.617841 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" podStartSLOduration=7.228344892 podStartE2EDuration="35.6178172s" podCreationTimestamp="2026-02-28 03:50:53 +0000 UTC" firstStartedPulling="2026-02-28 03:50:56.088707413 +0000 UTC m=+910.752746722" lastFinishedPulling="2026-02-28 03:51:24.478179721 +0000 UTC m=+939.142219030" observedRunningTime="2026-02-28 03:51:28.616741402 +0000 UTC m=+943.280780711" watchObservedRunningTime="2026-02-28 03:51:28.6178172 +0000 UTC m=+943.281856519" Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.625475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" event={"ID":"a5b6c7a0-a640-4faa-836c-7c5d0c29acd9","Type":"ContainerStarted","Data":"37271002c373c552a68a0cd7fb056a616b9c554302eaa7a5b320fc7f63148eeb"} Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.625809 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.663803 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" podStartSLOduration=8.545633124 podStartE2EDuration="35.663784159s" podCreationTimestamp="2026-02-28 03:50:53 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.359610115 +0000 UTC m=+912.023649424" lastFinishedPulling="2026-02-28 03:51:24.47776115 +0000 UTC m=+939.141800459" observedRunningTime="2026-02-28 03:51:28.657816987 +0000 UTC m=+943.321856296" watchObservedRunningTime="2026-02-28 03:51:28.663784159 +0000 UTC m=+943.327823468" Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.735701 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflrx"] Feb 28 03:51:28 crc kubenswrapper[4624]: I0228 03:51:28.986462 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67"] Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.155407 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc"] Feb 28 03:51:29 crc kubenswrapper[4624]: W0228 03:51:29.198444 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61521bf4_1381_4fe8_a9d3_0948ebaa1ca6.slice/crio-b591f2d94db4bd2c6e0839ca4554432d89981b158811feac48eac4f9bbb1ba11 
WatchSource:0}: Error finding container b591f2d94db4bd2c6e0839ca4554432d89981b158811feac48eac4f9bbb1ba11: Status 404 returned error can't find the container with id b591f2d94db4bd2c6e0839ca4554432d89981b158811feac48eac4f9bbb1ba11 Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.647669 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" event={"ID":"d6722506-d5dd-4fb4-b81a-d27c5dab59dd","Type":"ContainerStarted","Data":"53ed4118cdf7bc442a0f78c1106ed7f82c83026ec09bb3888f25d48c7dbe3f92"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.648383 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.660412 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" event={"ID":"8cb779fb-ff77-468c-9198-065b3e4bf393","Type":"ContainerStarted","Data":"47be6affcda19a4f53952b524c890e009466a851b7af10582b8e6f0bf4108503"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.664199 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.681734 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" event={"ID":"8507d808-bdf4-47f7-adb9-e3746c4768bf","Type":"ContainerStarted","Data":"4d69f4f132a84986011a3b30f8e43967a436543e9b317e9af0f6ddee0edb05eb"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.681879 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.698394 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" podStartSLOduration=5.185141155 podStartE2EDuration="35.698380023s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.486282247 +0000 UTC m=+912.150321556" lastFinishedPulling="2026-02-28 03:51:27.999521105 +0000 UTC m=+942.663560424" observedRunningTime="2026-02-28 03:51:29.678561875 +0000 UTC m=+944.342601184" watchObservedRunningTime="2026-02-28 03:51:29.698380023 +0000 UTC m=+944.362419332" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.703875 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" event={"ID":"9776c87a-53fb-404c-8bbe-0fbeb07eda0d","Type":"ContainerStarted","Data":"41ffdc8c8cf4beea76b4bb6126096f6e886aff61c9a895a86606e2bbeefb13f1"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.739728 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" podStartSLOduration=9.36835768 podStartE2EDuration="35.739704216s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.361329203 +0000 UTC m=+912.025368512" lastFinishedPulling="2026-02-28 03:51:23.732675739 +0000 UTC m=+938.396715048" observedRunningTime="2026-02-28 03:51:29.730579218 +0000 UTC m=+944.394618527" watchObservedRunningTime="2026-02-28 03:51:29.739704216 +0000 UTC m=+944.403743525" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.755328 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerStarted","Data":"618d9aa783e9db89105d104e94b63112de8fe94d588e4f6a60bc4faad2b848eb"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.794920 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" podStartSLOduration=8.678086827 podStartE2EDuration="35.794897835s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.360970002 +0000 UTC m=+912.025009311" lastFinishedPulling="2026-02-28 03:51:24.47778101 +0000 UTC m=+939.141820319" observedRunningTime="2026-02-28 03:51:29.780265278 +0000 UTC m=+944.444304587" watchObservedRunningTime="2026-02-28 03:51:29.794897835 +0000 UTC m=+944.458937144" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.801929 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" event={"ID":"195b486b-92db-481a-9478-7a3edfeb79ae","Type":"ContainerStarted","Data":"e32aea374162c150fe7436149eff05ab0824e9165f32e2933ab1ea9331272d19"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.803538 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.818147 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" event={"ID":"50df7aca-97ff-41dc-92cc-143cb02acea8","Type":"ContainerStarted","Data":"62605b0c632928b11cb25a3e22a631c1474405902d76009fabd90188a9e1b2d2"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.820065 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.834919 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" podStartSLOduration=5.161583956 podStartE2EDuration="35.834900592s" 
podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.385070008 +0000 UTC m=+912.049109317" lastFinishedPulling="2026-02-28 03:51:28.058386634 +0000 UTC m=+942.722425953" observedRunningTime="2026-02-28 03:51:29.831105559 +0000 UTC m=+944.495144868" watchObservedRunningTime="2026-02-28 03:51:29.834900592 +0000 UTC m=+944.498939901" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.857997 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" event={"ID":"2ef92f5a-9f82-40fb-81a2-c4a75aec60cf","Type":"ContainerStarted","Data":"fd6d09fcc3aaf74eb07d52451da30b96c473d21d180d6a09c9eb75ff8386460e"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.858120 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.864657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" event={"ID":"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6","Type":"ContainerStarted","Data":"b591f2d94db4bd2c6e0839ca4554432d89981b158811feac48eac4f9bbb1ba11"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.913462 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" event={"ID":"3e549f4d-18e0-49cf-a82e-efde664ab810","Type":"ContainerStarted","Data":"c0ccf6930e3b761145b697e9ea939a0e6cf003fbcd97fcbb816906ce27101de7"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.913604 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.918446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" event={"ID":"f62e258f-732a-4da1-8670-475725509310","Type":"ContainerStarted","Data":"194e7ade47d0ac2a1a57dbf916154cc0082c78dc47fabe26ca3ae4092a7af5eb"} Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.919747 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.919803 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" podStartSLOduration=9.676732282 podStartE2EDuration="36.919785047s" podCreationTimestamp="2026-02-28 03:50:53 +0000 UTC" firstStartedPulling="2026-02-28 03:50:56.490156749 +0000 UTC m=+911.154196048" lastFinishedPulling="2026-02-28 03:51:23.733209514 +0000 UTC m=+938.397248813" observedRunningTime="2026-02-28 03:51:29.876413549 +0000 UTC m=+944.540452858" watchObservedRunningTime="2026-02-28 03:51:29.919785047 +0000 UTC m=+944.583824356" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.952277 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" podStartSLOduration=8.828303008 podStartE2EDuration="35.952256s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:56.607469914 +0000 UTC m=+911.271509223" lastFinishedPulling="2026-02-28 03:51:23.731422906 +0000 UTC m=+938.395462215" observedRunningTime="2026-02-28 03:51:29.924516176 +0000 UTC m=+944.588555475" watchObservedRunningTime="2026-02-28 03:51:29.952256 +0000 UTC m=+944.616295309" Feb 28 03:51:29 crc kubenswrapper[4624]: I0228 03:51:29.965686 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" podStartSLOduration=9.617058953 
podStartE2EDuration="35.965653863s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.384704437 +0000 UTC m=+912.048743746" lastFinishedPulling="2026-02-28 03:51:23.733299307 +0000 UTC m=+938.397338656" observedRunningTime="2026-02-28 03:51:29.950642365 +0000 UTC m=+944.614681674" watchObservedRunningTime="2026-02-28 03:51:29.965653863 +0000 UTC m=+944.629693172" Feb 28 03:51:30 crc kubenswrapper[4624]: I0228 03:51:30.002698 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" podStartSLOduration=9.132243501 podStartE2EDuration="37.002664999s" podCreationTimestamp="2026-02-28 03:50:53 +0000 UTC" firstStartedPulling="2026-02-28 03:50:55.862862038 +0000 UTC m=+910.526901347" lastFinishedPulling="2026-02-28 03:51:23.733283536 +0000 UTC m=+938.397322845" observedRunningTime="2026-02-28 03:51:30.001423066 +0000 UTC m=+944.665462375" watchObservedRunningTime="2026-02-28 03:51:30.002664999 +0000 UTC m=+944.666704308" Feb 28 03:51:32 crc kubenswrapper[4624]: I0228 03:51:32.977169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" event={"ID":"39ee7326-c4c7-4dee-a749-35da4ff62746","Type":"ContainerStarted","Data":"86ac1f475b01a6740e01758bee0db4277294c3718b68712f37b9b26aff3f71ea"} Feb 28 03:51:32 crc kubenswrapper[4624]: I0228 03:51:32.979929 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.008064 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" event={"ID":"61521bf4-1381-4fe8-a9d3-0948ebaa1ca6","Type":"ContainerStarted","Data":"d0f5f58cd62a138ffd3aad947855281638156d53d460732230be4d592e42ebc5"} Feb 28 03:51:33 crc 
kubenswrapper[4624]: I0228 03:51:33.008500 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.034787 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerStarted","Data":"d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.037946 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" podStartSLOduration=8.375642463 podStartE2EDuration="39.037913079s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.40722946 +0000 UTC m=+912.071268769" lastFinishedPulling="2026-02-28 03:51:28.069500056 +0000 UTC m=+942.733539385" observedRunningTime="2026-02-28 03:51:33.020580368 +0000 UTC m=+947.684619677" watchObservedRunningTime="2026-02-28 03:51:33.037913079 +0000 UTC m=+947.701952388" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.053728 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" event={"ID":"c6a151b1-0add-4b07-aa32-9a9e0dc2f526","Type":"ContainerStarted","Data":"3a851ae019fbbc5498dd9b39cf478a0bad840b15ae6e90d9c10e94d575e73cb3"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.054134 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.064921 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" 
event={"ID":"154dfd82-a449-4812-bdd5-3e9c8a474b3d","Type":"ContainerStarted","Data":"cf9a2ac195aa6b4e499461b433e5553c2fb31724e4b14d38885abe0bd107bf64"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.088823 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" podStartSLOduration=38.088802302 podStartE2EDuration="38.088802302s" podCreationTimestamp="2026-02-28 03:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:51:33.081528004 +0000 UTC m=+947.745567323" watchObservedRunningTime="2026-02-28 03:51:33.088802302 +0000 UTC m=+947.752841611" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.092076 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" event={"ID":"5013f14b-e7ba-400b-8a1e-d187991a0e49","Type":"ContainerStarted","Data":"dc42267ab45fe61d0975471e36c6130e6dd026341d94689df6ac08943a578805"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.092920 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.097975 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" event={"ID":"f3c08c1c-5646-48e9-9c9a-537b7619ecb0","Type":"ContainerStarted","Data":"56de18ca88c8e561642decd9d5a72be147dbb382ceeac4c319f79935a6998225"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.099501 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.109155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" event={"ID":"b26b01f4-0d96-4a5b-bb71-58d691b92119","Type":"ContainerStarted","Data":"93170c83804552931c922b82e76240a24e3dd25d8c61bcedcfdedacc6cb4bb49"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.109458 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.138989 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" podStartSLOduration=12.612084372 podStartE2EDuration="39.138966245s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.204674337 +0000 UTC m=+911.868713646" lastFinishedPulling="2026-02-28 03:51:23.73155617 +0000 UTC m=+938.395595519" observedRunningTime="2026-02-28 03:51:33.120712648 +0000 UTC m=+947.784751957" watchObservedRunningTime="2026-02-28 03:51:33.138966245 +0000 UTC m=+947.803005554" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.151341 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" event={"ID":"797119fd-2208-40d7-86c8-594e59529182","Type":"ContainerStarted","Data":"4f0dab159ccf6336137d10351527219f84e0073e809479aa378dbac066609885"} Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.152189 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.162845 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7d47r" podStartSLOduration=7.320831044 podStartE2EDuration="38.162827383s" podCreationTimestamp="2026-02-28 03:50:55 +0000 UTC" 
firstStartedPulling="2026-02-28 03:50:57.429278398 +0000 UTC m=+912.093317707" lastFinishedPulling="2026-02-28 03:51:28.271274737 +0000 UTC m=+942.935314046" observedRunningTime="2026-02-28 03:51:33.157491698 +0000 UTC m=+947.821531007" watchObservedRunningTime="2026-02-28 03:51:33.162827383 +0000 UTC m=+947.826866692" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.255324 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" podStartSLOduration=8.600551293 podStartE2EDuration="39.255300145s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.407523828 +0000 UTC m=+912.071563137" lastFinishedPulling="2026-02-28 03:51:28.06227267 +0000 UTC m=+942.726311989" observedRunningTime="2026-02-28 03:51:33.24375357 +0000 UTC m=+947.907792889" watchObservedRunningTime="2026-02-28 03:51:33.255300145 +0000 UTC m=+947.919339444" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.279064 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" podStartSLOduration=8.624079312 podStartE2EDuration="39.27904736s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.40797372 +0000 UTC m=+912.072013029" lastFinishedPulling="2026-02-28 03:51:28.062941748 +0000 UTC m=+942.726981077" observedRunningTime="2026-02-28 03:51:33.270982911 +0000 UTC m=+947.935022220" watchObservedRunningTime="2026-02-28 03:51:33.27904736 +0000 UTC m=+947.943086669" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.336358 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" podStartSLOduration=8.762929432 podStartE2EDuration="39.336338675s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 
03:50:57.426225265 +0000 UTC m=+912.090264574" lastFinishedPulling="2026-02-28 03:51:27.999634488 +0000 UTC m=+942.663673817" observedRunningTime="2026-02-28 03:51:33.308550451 +0000 UTC m=+947.972589770" watchObservedRunningTime="2026-02-28 03:51:33.336338675 +0000 UTC m=+948.000377984" Feb 28 03:51:33 crc kubenswrapper[4624]: I0228 03:51:33.338017 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" podStartSLOduration=8.612793124 podStartE2EDuration="39.338010321s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.360718976 +0000 UTC m=+912.024758285" lastFinishedPulling="2026-02-28 03:51:28.085936163 +0000 UTC m=+942.749975482" observedRunningTime="2026-02-28 03:51:33.335600206 +0000 UTC m=+947.999639525" watchObservedRunningTime="2026-02-28 03:51:33.338010321 +0000 UTC m=+948.002049630" Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.167653 4624 generic.go:334] "Generic (PLEG): container finished" podID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerID="d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf" exitCode=0 Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.167780 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerDied","Data":"d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf"} Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.343772 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-5rdgn" Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.560571 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-999d845f-jrsj4" Feb 28 03:51:34 crc kubenswrapper[4624]: 
I0228 03:51:34.575355 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-8vq2d" Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.587975 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-gkxmg" Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.704508 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-76fd76856-knmpc" Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.719553 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-vdhdb" Feb 28 03:51:34 crc kubenswrapper[4624]: I0228 03:51:34.793641 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-585b788787-b97cm" Feb 28 03:51:35 crc kubenswrapper[4624]: I0228 03:51:35.270186 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-xw2s7" Feb 28 03:51:35 crc kubenswrapper[4624]: I0228 03:51:35.576740 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-tgr2z" Feb 28 03:51:35 crc kubenswrapper[4624]: I0228 03:51:35.835806 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-7w84p" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.201718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" 
event={"ID":"582d0963-7f3a-4664-85e4-9148c495eb1a","Type":"ContainerStarted","Data":"dec38670fb5b9605d008e28133bc5f660cc8c2dc794ae37c6210d4b82ccfd0bc"} Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.202245 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.203467 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" event={"ID":"6497616d-eb08-4bd4-b3a0-8ee000cdfe47","Type":"ContainerStarted","Data":"380e493db3ec7cbee8ac974544dfe89476d20a26236fa87794aebc5a54b4a76d"} Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.203676 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.205255 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" event={"ID":"84190c06-4523-4d3d-ab8c-cec0aca7c393","Type":"ContainerStarted","Data":"665f14536d44e1bd5c146302d581b5df1e76d4432c87bdca64ad91a3154dc90d"} Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.205433 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.207675 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" event={"ID":"9776c87a-53fb-404c-8bbe-0fbeb07eda0d","Type":"ContainerStarted","Data":"7ab728d83b0d6d974e5aa105e58a35f18765a9257f4b0a27df0bea0db0cc746f"} Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.207862 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.210109 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerStarted","Data":"8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80"} Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.211656 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" event={"ID":"a96b8e7a-1320-4ede-9f43-ec80e2d562c9","Type":"ContainerStarted","Data":"e5e21b0b004ecbb1fb0680f8fdf3b19edd6afc4b75f1c5c5d244775e7504aabf"} Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.211772 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.231592 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" podStartSLOduration=3.7648725560000003 podStartE2EDuration="43.231573627s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.239579646 +0000 UTC m=+911.903618955" lastFinishedPulling="2026-02-28 03:51:36.706280717 +0000 UTC m=+951.370320026" observedRunningTime="2026-02-28 03:51:37.227733793 +0000 UTC m=+951.891773102" watchObservedRunningTime="2026-02-28 03:51:37.231573627 +0000 UTC m=+951.895612936" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.298676 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" podStartSLOduration=35.702045731 podStartE2EDuration="43.298653918s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" 
firstStartedPulling="2026-02-28 03:51:29.093854151 +0000 UTC m=+943.757893460" lastFinishedPulling="2026-02-28 03:51:36.690462338 +0000 UTC m=+951.354501647" observedRunningTime="2026-02-28 03:51:37.292278626 +0000 UTC m=+951.956317935" watchObservedRunningTime="2026-02-28 03:51:37.298653918 +0000 UTC m=+951.962693227" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.388050 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" podStartSLOduration=3.9241798919999997 podStartE2EDuration="43.388023826s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.242022332 +0000 UTC m=+911.906061641" lastFinishedPulling="2026-02-28 03:51:36.705866266 +0000 UTC m=+951.369905575" observedRunningTime="2026-02-28 03:51:37.384627284 +0000 UTC m=+952.048666593" watchObservedRunningTime="2026-02-28 03:51:37.388023826 +0000 UTC m=+952.052063135" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.507953 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" podStartSLOduration=4.090915992 podStartE2EDuration="43.507934954s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" firstStartedPulling="2026-02-28 03:50:57.266806695 +0000 UTC m=+911.930846004" lastFinishedPulling="2026-02-28 03:51:36.683825657 +0000 UTC m=+951.347864966" observedRunningTime="2026-02-28 03:51:37.470664431 +0000 UTC m=+952.134703740" watchObservedRunningTime="2026-02-28 03:51:37.507934954 +0000 UTC m=+952.171974263" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.511365 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" podStartSLOduration=34.824962217 podStartE2EDuration="43.511357017s" podCreationTimestamp="2026-02-28 03:50:54 +0000 UTC" 
firstStartedPulling="2026-02-28 03:51:28.018243323 +0000 UTC m=+942.682282652" lastFinishedPulling="2026-02-28 03:51:36.704638143 +0000 UTC m=+951.368677452" observedRunningTime="2026-02-28 03:51:37.508365095 +0000 UTC m=+952.172404404" watchObservedRunningTime="2026-02-28 03:51:37.511357017 +0000 UTC m=+952.175396326" Feb 28 03:51:37 crc kubenswrapper[4624]: I0228 03:51:37.903295 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c5dbcf94c-psgpc" Feb 28 03:51:38 crc kubenswrapper[4624]: I0228 03:51:38.221669 4624 generic.go:334] "Generic (PLEG): container finished" podID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerID="8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80" exitCode=0 Feb 28 03:51:38 crc kubenswrapper[4624]: I0228 03:51:38.221875 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerDied","Data":"8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80"} Feb 28 03:51:39 crc kubenswrapper[4624]: I0228 03:51:39.231197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerStarted","Data":"075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7"} Feb 28 03:51:39 crc kubenswrapper[4624]: I0228 03:51:39.255636 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qflrx" podStartSLOduration=11.097503571 podStartE2EDuration="14.255616928s" podCreationTimestamp="2026-02-28 03:51:25 +0000 UTC" firstStartedPulling="2026-02-28 03:51:35.679046304 +0000 UTC m=+950.343085613" lastFinishedPulling="2026-02-28 03:51:38.837159661 +0000 UTC m=+953.501198970" observedRunningTime="2026-02-28 03:51:39.252312219 +0000 UTC m=+953.916351528" 
watchObservedRunningTime="2026-02-28 03:51:39.255616928 +0000 UTC m=+953.919656237" Feb 28 03:51:44 crc kubenswrapper[4624]: I0228 03:51:44.665852 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-rvz6s" Feb 28 03:51:44 crc kubenswrapper[4624]: I0228 03:51:44.783159 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-tvr7t" Feb 28 03:51:44 crc kubenswrapper[4624]: I0228 03:51:44.815246 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-k68gx" Feb 28 03:51:44 crc kubenswrapper[4624]: I0228 03:51:44.841920 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-dv9vf" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.199794 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-psffj" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.309974 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-x8vll" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.418183 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-c6gww" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.442316 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-54l7x" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.479997 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-lfkwt" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.491574 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.491630 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:45 crc kubenswrapper[4624]: I0228 03:51:45.555207 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:46 crc kubenswrapper[4624]: I0228 03:51:46.350172 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:46 crc kubenswrapper[4624]: I0228 03:51:46.410504 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflrx"] Feb 28 03:51:46 crc kubenswrapper[4624]: I0228 03:51:46.890242 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.317769 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qflrx" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="registry-server" containerID="cri-o://075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7" gracePeriod=2 Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.736279 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.817175 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-catalog-content\") pod \"c5946fb8-c868-4096-a60d-90fb78e05f88\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.817308 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmgsr\" (UniqueName: \"kubernetes.io/projected/c5946fb8-c868-4096-a60d-90fb78e05f88-kube-api-access-bmgsr\") pod \"c5946fb8-c868-4096-a60d-90fb78e05f88\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.817447 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-utilities\") pod \"c5946fb8-c868-4096-a60d-90fb78e05f88\" (UID: \"c5946fb8-c868-4096-a60d-90fb78e05f88\") " Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.819640 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-utilities" (OuterVolumeSpecName: "utilities") pod "c5946fb8-c868-4096-a60d-90fb78e05f88" (UID: "c5946fb8-c868-4096-a60d-90fb78e05f88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.823669 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5946fb8-c868-4096-a60d-90fb78e05f88-kube-api-access-bmgsr" (OuterVolumeSpecName: "kube-api-access-bmgsr") pod "c5946fb8-c868-4096-a60d-90fb78e05f88" (UID: "c5946fb8-c868-4096-a60d-90fb78e05f88"). InnerVolumeSpecName "kube-api-access-bmgsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.854026 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5946fb8-c868-4096-a60d-90fb78e05f88" (UID: "c5946fb8-c868-4096-a60d-90fb78e05f88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.919964 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.920017 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmgsr\" (UniqueName: \"kubernetes.io/projected/c5946fb8-c868-4096-a60d-90fb78e05f88-kube-api-access-bmgsr\") on node \"crc\" DevicePath \"\"" Feb 28 03:51:48 crc kubenswrapper[4624]: I0228 03:51:48.920033 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5946fb8-c868-4096-a60d-90fb78e05f88-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.329462 4624 generic.go:334] "Generic (PLEG): container finished" podID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerID="075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7" exitCode=0 Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.329518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerDied","Data":"075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7"} Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.329565 4624 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-qflrx" event={"ID":"c5946fb8-c868-4096-a60d-90fb78e05f88","Type":"ContainerDied","Data":"618d9aa783e9db89105d104e94b63112de8fe94d588e4f6a60bc4faad2b848eb"} Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.329589 4624 scope.go:117] "RemoveContainer" containerID="075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.330689 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflrx" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.363142 4624 scope.go:117] "RemoveContainer" containerID="8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.366296 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflrx"] Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.376265 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflrx"] Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.409990 4624 scope.go:117] "RemoveContainer" containerID="d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.438474 4624 scope.go:117] "RemoveContainer" containerID="075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7" Feb 28 03:51:49 crc kubenswrapper[4624]: E0228 03:51:49.443408 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7\": container with ID starting with 075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7 not found: ID does not exist" containerID="075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.443481 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7"} err="failed to get container status \"075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7\": rpc error: code = NotFound desc = could not find container \"075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7\": container with ID starting with 075024935c56afa27aff90111ab672748e6cb8055142c2dd2bfa233ac9390cb7 not found: ID does not exist" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.443520 4624 scope.go:117] "RemoveContainer" containerID="8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80" Feb 28 03:51:49 crc kubenswrapper[4624]: E0228 03:51:49.444655 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80\": container with ID starting with 8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80 not found: ID does not exist" containerID="8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.444691 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80"} err="failed to get container status \"8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80\": rpc error: code = NotFound desc = could not find container \"8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80\": container with ID starting with 8995ecd779d20703de42b29f15c596687fa7c20eae4830f9b4affb1d5bdb2c80 not found: ID does not exist" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.444713 4624 scope.go:117] "RemoveContainer" containerID="d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf" Feb 28 03:51:49 crc kubenswrapper[4624]: E0228 
03:51:49.445021 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf\": container with ID starting with d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf not found: ID does not exist" containerID="d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.445099 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf"} err="failed to get container status \"d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf\": rpc error: code = NotFound desc = could not find container \"d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf\": container with ID starting with d5c9b5d53562e41ab2f27407f7b35d45709a9d2234741b84603f420fe4a865cf not found: ID does not exist" Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.540523 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:51:49 crc kubenswrapper[4624]: I0228 03:51:49.540589 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:51:50 crc kubenswrapper[4624]: I0228 03:51:50.097992 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" 
path="/var/lib/kubelet/pods/c5946fb8-c868-4096-a60d-90fb78e05f88/volumes" Feb 28 03:51:50 crc kubenswrapper[4624]: I0228 03:51:50.462469 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-c77466965-f8x9g" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.163254 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537512-gk4pk"] Feb 28 03:52:00 crc kubenswrapper[4624]: E0228 03:52:00.164338 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="extract-content" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.164360 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="extract-content" Feb 28 03:52:00 crc kubenswrapper[4624]: E0228 03:52:00.164394 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="extract-utilities" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.164409 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="extract-utilities" Feb 28 03:52:00 crc kubenswrapper[4624]: E0228 03:52:00.164440 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="registry-server" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.164455 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="registry-server" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.164699 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5946fb8-c868-4096-a60d-90fb78e05f88" containerName="registry-server" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.165553 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.168484 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.171293 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.179375 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.190347 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537512-gk4pk"] Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.202259 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swrs7\" (UniqueName: \"kubernetes.io/projected/f5f0bdad-f857-4411-a897-baf63edc11b3-kube-api-access-swrs7\") pod \"auto-csr-approver-29537512-gk4pk\" (UID: \"f5f0bdad-f857-4411-a897-baf63edc11b3\") " pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.303671 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swrs7\" (UniqueName: \"kubernetes.io/projected/f5f0bdad-f857-4411-a897-baf63edc11b3-kube-api-access-swrs7\") pod \"auto-csr-approver-29537512-gk4pk\" (UID: \"f5f0bdad-f857-4411-a897-baf63edc11b3\") " pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.335635 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swrs7\" (UniqueName: \"kubernetes.io/projected/f5f0bdad-f857-4411-a897-baf63edc11b3-kube-api-access-swrs7\") pod \"auto-csr-approver-29537512-gk4pk\" (UID: \"f5f0bdad-f857-4411-a897-baf63edc11b3\") " 
pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.492753 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:00 crc kubenswrapper[4624]: I0228 03:52:00.747672 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537512-gk4pk"] Feb 28 03:52:00 crc kubenswrapper[4624]: W0228 03:52:00.750368 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5f0bdad_f857_4411_a897_baf63edc11b3.slice/crio-df0d680292fc53d9ce1f412253a4c64bc7bc94de352768dd2c720278c748ee1a WatchSource:0}: Error finding container df0d680292fc53d9ce1f412253a4c64bc7bc94de352768dd2c720278c748ee1a: Status 404 returned error can't find the container with id df0d680292fc53d9ce1f412253a4c64bc7bc94de352768dd2c720278c748ee1a Feb 28 03:52:01 crc kubenswrapper[4624]: I0228 03:52:01.434330 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" event={"ID":"f5f0bdad-f857-4411-a897-baf63edc11b3","Type":"ContainerStarted","Data":"df0d680292fc53d9ce1f412253a4c64bc7bc94de352768dd2c720278c748ee1a"} Feb 28 03:52:02 crc kubenswrapper[4624]: I0228 03:52:02.443871 4624 generic.go:334] "Generic (PLEG): container finished" podID="f5f0bdad-f857-4411-a897-baf63edc11b3" containerID="3bd5875b8feee7fa802c0aa6653df90bfd201a3bf80a047fe55aa1467d50327f" exitCode=0 Feb 28 03:52:02 crc kubenswrapper[4624]: I0228 03:52:02.444670 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" event={"ID":"f5f0bdad-f857-4411-a897-baf63edc11b3","Type":"ContainerDied","Data":"3bd5875b8feee7fa802c0aa6653df90bfd201a3bf80a047fe55aa1467d50327f"} Feb 28 03:52:03 crc kubenswrapper[4624]: I0228 03:52:03.818921 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:03 crc kubenswrapper[4624]: I0228 03:52:03.998163 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swrs7\" (UniqueName: \"kubernetes.io/projected/f5f0bdad-f857-4411-a897-baf63edc11b3-kube-api-access-swrs7\") pod \"f5f0bdad-f857-4411-a897-baf63edc11b3\" (UID: \"f5f0bdad-f857-4411-a897-baf63edc11b3\") " Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.004872 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f0bdad-f857-4411-a897-baf63edc11b3-kube-api-access-swrs7" (OuterVolumeSpecName: "kube-api-access-swrs7") pod "f5f0bdad-f857-4411-a897-baf63edc11b3" (UID: "f5f0bdad-f857-4411-a897-baf63edc11b3"). InnerVolumeSpecName "kube-api-access-swrs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.100418 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swrs7\" (UniqueName: \"kubernetes.io/projected/f5f0bdad-f857-4411-a897-baf63edc11b3-kube-api-access-swrs7\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.461523 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" event={"ID":"f5f0bdad-f857-4411-a897-baf63edc11b3","Type":"ContainerDied","Data":"df0d680292fc53d9ce1f412253a4c64bc7bc94de352768dd2c720278c748ee1a"} Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.461590 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0d680292fc53d9ce1f412253a4c64bc7bc94de352768dd2c720278c748ee1a" Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.461607 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537512-gk4pk" Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.911954 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537506-t48b7"] Feb 28 03:52:04 crc kubenswrapper[4624]: I0228 03:52:04.921283 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537506-t48b7"] Feb 28 03:52:06 crc kubenswrapper[4624]: I0228 03:52:06.100391 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e032ea25-286f-4ab4-93dc-2f1aefee2245" path="/var/lib/kubelet/pods/e032ea25-286f-4ab4-93dc-2f1aefee2245/volumes" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.539503 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c94g5"] Feb 28 03:52:10 crc kubenswrapper[4624]: E0228 03:52:10.541310 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5f0bdad-f857-4411-a897-baf63edc11b3" containerName="oc" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.541399 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5f0bdad-f857-4411-a897-baf63edc11b3" containerName="oc" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.541600 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f0bdad-f857-4411-a897-baf63edc11b3" containerName="oc" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.542367 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.547436 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-dv7hn" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.547840 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.548272 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.553595 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.561597 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c94g5"] Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.600770 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x6fs5"] Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.602044 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.608982 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.617786 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn67h\" (UniqueName: \"kubernetes.io/projected/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-kube-api-access-xn67h\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.617825 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-config\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.617851 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.617879 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55076335-12f1-4069-ad43-3a527f4b349d-config\") pod \"dnsmasq-dns-675f4bcbfc-c94g5\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.617917 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pbzkt\" (UniqueName: \"kubernetes.io/projected/55076335-12f1-4069-ad43-3a527f4b349d-kube-api-access-pbzkt\") pod \"dnsmasq-dns-675f4bcbfc-c94g5\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.677788 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x6fs5"] Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.718704 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn67h\" (UniqueName: \"kubernetes.io/projected/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-kube-api-access-xn67h\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.718773 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-config\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.718806 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.718834 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55076335-12f1-4069-ad43-3a527f4b349d-config\") pod \"dnsmasq-dns-675f4bcbfc-c94g5\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 
03:52:10.718878 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbzkt\" (UniqueName: \"kubernetes.io/projected/55076335-12f1-4069-ad43-3a527f4b349d-kube-api-access-pbzkt\") pod \"dnsmasq-dns-675f4bcbfc-c94g5\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.720685 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-config\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.721250 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.721838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55076335-12f1-4069-ad43-3a527f4b349d-config\") pod \"dnsmasq-dns-675f4bcbfc-c94g5\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.740150 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn67h\" (UniqueName: \"kubernetes.io/projected/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-kube-api-access-xn67h\") pod \"dnsmasq-dns-78dd6ddcc-x6fs5\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.743719 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pbzkt\" (UniqueName: \"kubernetes.io/projected/55076335-12f1-4069-ad43-3a527f4b349d-kube-api-access-pbzkt\") pod \"dnsmasq-dns-675f4bcbfc-c94g5\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.860761 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:10 crc kubenswrapper[4624]: I0228 03:52:10.923835 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:11 crc kubenswrapper[4624]: I0228 03:52:11.146444 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c94g5"] Feb 28 03:52:11 crc kubenswrapper[4624]: I0228 03:52:11.214771 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x6fs5"] Feb 28 03:52:11 crc kubenswrapper[4624]: I0228 03:52:11.518351 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" event={"ID":"55076335-12f1-4069-ad43-3a527f4b349d","Type":"ContainerStarted","Data":"62aed2b89094844c9a5d373a78afa4ef3257ba6202dc0470bf1070ed7b835f46"} Feb 28 03:52:11 crc kubenswrapper[4624]: I0228 03:52:11.520619 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" event={"ID":"93cf5f4f-9846-43ce-86b8-ee85d1a39c54","Type":"ContainerStarted","Data":"8c375e70b0d517a837721767c1db9ebefff989b566d87ef79b2ea9e8170b1423"} Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.302840 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c94g5"] Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.357906 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fljlb"] Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.367782 4624 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.401293 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fljlb"] Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.484537 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.484621 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-config\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.484666 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnntl\" (UniqueName: \"kubernetes.io/projected/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-kube-api-access-lnntl\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.591220 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnntl\" (UniqueName: \"kubernetes.io/projected/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-kube-api-access-lnntl\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.592101 4624 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.592152 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-config\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.593013 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-config\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.593649 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.657540 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnntl\" (UniqueName: \"kubernetes.io/projected/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-kube-api-access-lnntl\") pod \"dnsmasq-dns-5ccc8479f9-fljlb\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.725557 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.740993 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x6fs5"] Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.784106 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9hgg9"] Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.785722 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.807297 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9hgg9"] Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.908980 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtft\" (UniqueName: \"kubernetes.io/projected/974aec24-b650-4ce0-912e-a926f8cf5739-kube-api-access-zrtft\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.909101 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-config\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:13 crc kubenswrapper[4624]: I0228 03:52:13.909132 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 
03:52:14.011000 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtft\" (UniqueName: \"kubernetes.io/projected/974aec24-b650-4ce0-912e-a926f8cf5739-kube-api-access-zrtft\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.011130 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-config\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.011160 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.012037 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.013066 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-config\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.036248 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtft\" 
(UniqueName: \"kubernetes.io/projected/974aec24-b650-4ce0-912e-a926f8cf5739-kube-api-access-zrtft\") pod \"dnsmasq-dns-57d769cc4f-9hgg9\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.169229 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.533695 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fljlb"] Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.558339 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.559713 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.565366 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.565672 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.565976 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.566132 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pqtbp" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.576817 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.577068 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 03:52:14 crc kubenswrapper[4624]: 
I0228 03:52:14.577589 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.623152 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.626028 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" event={"ID":"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1","Type":"ContainerStarted","Data":"b5392700e3da5e24bc0b22a7d231dce4391e25f20077ec2726cee1fca3d67163"} Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740585 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740649 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740681 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740700 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740805 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6kd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-kube-api-access-rx6kd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740829 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740846 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740872 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740898 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740924 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.740954 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.772937 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9hgg9"] Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842613 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842678 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842700 
4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842728 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842747 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842785 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx6kd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-kube-api-access-rx6kd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842804 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842822 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842839 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842865 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.842889 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.844835 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.845054 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-plugins\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.845763 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.846461 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.846736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.847638 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.869738 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc 
kubenswrapper[4624]: I0228 03:52:14.871287 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.872966 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.890783 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.899811 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx6kd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-kube-api-access-rx6kd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:14 crc kubenswrapper[4624]: I0228 03:52:14.909067 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.006643 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:52:15 crc 
kubenswrapper[4624]: I0228 03:52:15.007899 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.015568 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jbx89" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.015898 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.016118 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.016235 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.016359 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.016940 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.023979 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.031802 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046205 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046276 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046319 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4be4f891-f796-4d4b-b916-e669037f474a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046337 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046361 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046384 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-config-data\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046403 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4be4f891-f796-4d4b-b916-e669037f474a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046424 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046445 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhx7\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-kube-api-access-fvhx7\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046464 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.046480 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153265 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153372 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153442 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4be4f891-f796-4d4b-b916-e669037f474a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153493 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153521 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153543 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-config-data\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " 
pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153605 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4be4f891-f796-4d4b-b916-e669037f474a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153634 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153660 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhx7\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-kube-api-access-fvhx7\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153713 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153773 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.153933 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.154057 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.154563 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.155033 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.156005 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-config-data\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.163467 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.169518 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.169518 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4be4f891-f796-4d4b-b916-e669037f474a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.170024 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.185973 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4be4f891-f796-4d4b-b916-e669037f474a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.189159 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhx7\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-kube-api-access-fvhx7\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.203964 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.248014 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.336650 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.725556 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" event={"ID":"974aec24-b650-4ce0-912e-a926f8cf5739","Type":"ContainerStarted","Data":"2353dcbdbc8ef962a02e4b0ac4d8f352920cc9d41c9ff3dd91fd2654e76f5161"} Feb 28 03:52:15 crc kubenswrapper[4624]: I0228 03:52:15.929516 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:52:15 crc kubenswrapper[4624]: W0228 03:52:15.966763 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fc13b81_9ecc_4b66_abbd_98c7e4e1c946.slice/crio-e5d2e92cea7d4e46871c773c025cba9f645fd75ef27bbea71f7bc01664f3b3b1 WatchSource:0}: Error finding container e5d2e92cea7d4e46871c773c025cba9f645fd75ef27bbea71f7bc01664f3b3b1: Status 404 returned error can't find the container with id e5d2e92cea7d4e46871c773c025cba9f645fd75ef27bbea71f7bc01664f3b3b1 Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.266832 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:52:16 crc kubenswrapper[4624]: W0228 03:52:16.314079 4624 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4be4f891_f796_4d4b_b916_e669037f474a.slice/crio-71a6843db037f3b2af3c2b6a89953850ec0d00eb3d1c841391debcadc1af077a WatchSource:0}: Error finding container 71a6843db037f3b2af3c2b6a89953850ec0d00eb3d1c841391debcadc1af077a: Status 404 returned error can't find the container with id 71a6843db037f3b2af3c2b6a89953850ec0d00eb3d1c841391debcadc1af077a Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.507956 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.509576 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.515789 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.516281 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.517283 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zrzxj" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.517537 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.521918 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.559590 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.605966 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/c9c8d03c-80e2-42fc-a320-8175c10a59c4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606043 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c8d03c-80e2-42fc-a320-8175c10a59c4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606118 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-config-data-default\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606148 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-kolla-config\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606191 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqhn\" (UniqueName: 
\"kubernetes.io/projected/c9c8d03c-80e2-42fc-a320-8175c10a59c4-kube-api-access-zqqhn\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606233 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.606281 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c8d03c-80e2-42fc-a320-8175c10a59c4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707622 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707722 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c8d03c-80e2-42fc-a320-8175c10a59c4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707752 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9c8d03c-80e2-42fc-a320-8175c10a59c4-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707803 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c8d03c-80e2-42fc-a320-8175c10a59c4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707860 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-config-data-default\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707882 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707903 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-kolla-config\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.707936 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqhn\" (UniqueName: \"kubernetes.io/projected/c9c8d03c-80e2-42fc-a320-8175c10a59c4-kube-api-access-zqqhn\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: 
I0228 03:52:16.711477 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.712413 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-config-data-default\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.712768 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.714835 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9c8d03c-80e2-42fc-a320-8175c10a59c4-kolla-config\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.722239 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9c8d03c-80e2-42fc-a320-8175c10a59c4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.739246 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9c8d03c-80e2-42fc-a320-8175c10a59c4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.741850 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c8d03c-80e2-42fc-a320-8175c10a59c4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.744458 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.747930 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqhn\" (UniqueName: \"kubernetes.io/projected/c9c8d03c-80e2-42fc-a320-8175c10a59c4-kube-api-access-zqqhn\") pod \"openstack-galera-0\" (UID: \"c9c8d03c-80e2-42fc-a320-8175c10a59c4\") " pod="openstack/openstack-galera-0" Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.763046 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946","Type":"ContainerStarted","Data":"e5d2e92cea7d4e46871c773c025cba9f645fd75ef27bbea71f7bc01664f3b3b1"} Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.768491 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4be4f891-f796-4d4b-b916-e669037f474a","Type":"ContainerStarted","Data":"71a6843db037f3b2af3c2b6a89953850ec0d00eb3d1c841391debcadc1af077a"} Feb 28 03:52:16 crc kubenswrapper[4624]: I0228 03:52:16.850752 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.159012 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.162956 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.166063 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.166375 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.166946 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-26n85" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.176161 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.179212 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220599 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnps8\" (UniqueName: \"kubernetes.io/projected/db8c8413-e456-4f82-9947-7d37578d237f-kube-api-access-qnps8\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220663 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-config-data-default\") pod \"openstack-cell1-galera-0\" 
(UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220737 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8c8413-e456-4f82-9947-7d37578d237f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220757 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220778 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db8c8413-e456-4f82-9947-7d37578d237f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220795 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-operator-scripts\") pod \"openstack-cell1-galera-0\" 
(UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.220814 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db8c8413-e456-4f82-9947-7d37578d237f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325250 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8c8413-e456-4f82-9947-7d37578d237f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325686 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325713 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db8c8413-e456-4f82-9947-7d37578d237f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " 
pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325733 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325753 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db8c8413-e456-4f82-9947-7d37578d237f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325835 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnps8\" (UniqueName: \"kubernetes.io/projected/db8c8413-e456-4f82-9947-7d37578d237f-kube-api-access-qnps8\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.325872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.326408 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc 
kubenswrapper[4624]: I0228 03:52:17.326716 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db8c8413-e456-4f82-9947-7d37578d237f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.326903 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.326953 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.328450 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db8c8413-e456-4f82-9947-7d37578d237f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.334548 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db8c8413-e456-4f82-9947-7d37578d237f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.336926 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8c8413-e456-4f82-9947-7d37578d237f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.359032 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnps8\" (UniqueName: \"kubernetes.io/projected/db8c8413-e456-4f82-9947-7d37578d237f-kube-api-access-qnps8\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.363631 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db8c8413-e456-4f82-9947-7d37578d237f\") " pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.504230 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.612843 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.614064 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.625945 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kg5lz" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.626214 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.626550 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.634504 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d248fe-a92f-469e-8283-3fd135198c65-combined-ca-bundle\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.634552 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d248fe-a92f-469e-8283-3fd135198c65-memcached-tls-certs\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.634620 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgpdr\" (UniqueName: \"kubernetes.io/projected/81d248fe-a92f-469e-8283-3fd135198c65-kube-api-access-dgpdr\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.636525 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81d248fe-a92f-469e-8283-3fd135198c65-config-data\") pod 
\"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.636598 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81d248fe-a92f-469e-8283-3fd135198c65-kolla-config\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.688431 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.748789 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81d248fe-a92f-469e-8283-3fd135198c65-config-data\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.748935 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81d248fe-a92f-469e-8283-3fd135198c65-kolla-config\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.749013 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d248fe-a92f-469e-8283-3fd135198c65-combined-ca-bundle\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.749047 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d248fe-a92f-469e-8283-3fd135198c65-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.749210 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgpdr\" (UniqueName: \"kubernetes.io/projected/81d248fe-a92f-469e-8283-3fd135198c65-kube-api-access-dgpdr\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.752124 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81d248fe-a92f-469e-8283-3fd135198c65-kolla-config\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.765654 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d248fe-a92f-469e-8283-3fd135198c65-combined-ca-bundle\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.792699 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81d248fe-a92f-469e-8283-3fd135198c65-config-data\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.795775 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgpdr\" (UniqueName: \"kubernetes.io/projected/81d248fe-a92f-469e-8283-3fd135198c65-kube-api-access-dgpdr\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.845342 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/81d248fe-a92f-469e-8283-3fd135198c65-memcached-tls-certs\") pod \"memcached-0\" (UID: \"81d248fe-a92f-469e-8283-3fd135198c65\") " pod="openstack/memcached-0" Feb 28 03:52:17 crc kubenswrapper[4624]: I0228 03:52:17.956392 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 28 03:52:18 crc kubenswrapper[4624]: I0228 03:52:18.275768 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 03:52:18 crc kubenswrapper[4624]: W0228 03:52:18.319363 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c8d03c_80e2_42fc_a320_8175c10a59c4.slice/crio-0af3e80b8a02c2ef39d4320017da4c7e378b3711a960f2ef7a54965b765b290a WatchSource:0}: Error finding container 0af3e80b8a02c2ef39d4320017da4c7e378b3711a960f2ef7a54965b765b290a: Status 404 returned error can't find the container with id 0af3e80b8a02c2ef39d4320017da4c7e378b3711a960f2ef7a54965b765b290a Feb 28 03:52:18 crc kubenswrapper[4624]: I0228 03:52:18.753169 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 03:52:18 crc kubenswrapper[4624]: I0228 03:52:18.934916 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 03:52:18 crc kubenswrapper[4624]: I0228 03:52:18.943462 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db8c8413-e456-4f82-9947-7d37578d237f","Type":"ContainerStarted","Data":"ab338362be2991713e502bb3e0d77ff3f45cdc2dc7cbc39ed1f5043396124831"} Feb 28 03:52:18 crc kubenswrapper[4624]: I0228 03:52:18.959260 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9c8d03c-80e2-42fc-a320-8175c10a59c4","Type":"ContainerStarted","Data":"0af3e80b8a02c2ef39d4320017da4c7e378b3711a960f2ef7a54965b765b290a"} Feb 28 03:52:19 crc 
kubenswrapper[4624]: W0228 03:52:19.068746 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81d248fe_a92f_469e_8283_3fd135198c65.slice/crio-49ce3eb77c411492a51138bbfe957ed8c0c2d691be862bfa0d0dea167c29b1e1 WatchSource:0}: Error finding container 49ce3eb77c411492a51138bbfe957ed8c0c2d691be862bfa0d0dea167c29b1e1: Status 404 returned error can't find the container with id 49ce3eb77c411492a51138bbfe957ed8c0c2d691be862bfa0d0dea167c29b1e1 Feb 28 03:52:19 crc kubenswrapper[4624]: I0228 03:52:19.540387 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:52:19 crc kubenswrapper[4624]: I0228 03:52:19.540467 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:52:19 crc kubenswrapper[4624]: I0228 03:52:19.540559 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:52:19 crc kubenswrapper[4624]: I0228 03:52:19.541374 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ae4a4e8e6c778ba7c9f4e2d8ca7006770f5fd2af20468097f12f94d4858478d"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 03:52:19 crc kubenswrapper[4624]: I0228 03:52:19.541426 4624 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://5ae4a4e8e6c778ba7c9f4e2d8ca7006770f5fd2af20468097f12f94d4858478d" gracePeriod=600 Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.030895 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"81d248fe-a92f-469e-8283-3fd135198c65","Type":"ContainerStarted","Data":"49ce3eb77c411492a51138bbfe957ed8c0c2d691be862bfa0d0dea167c29b1e1"} Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.070143 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.074858 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.078748 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="5ae4a4e8e6c778ba7c9f4e2d8ca7006770f5fd2af20468097f12f94d4858478d" exitCode=0 Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.078793 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"5ae4a4e8e6c778ba7c9f4e2d8ca7006770f5fd2af20468097f12f94d4858478d"} Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.078830 4624 scope.go:117] "RemoveContainer" containerID="d90ea216a2f4b67d549472e18b2176a4478f7a69481157402ae530c48f3b1213" Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.079194 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mr2m4" Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.153314 4624 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.239708 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfg97\" (UniqueName: \"kubernetes.io/projected/eaa63baf-d297-4867-b87d-2c49da381d42-kube-api-access-jfg97\") pod \"kube-state-metrics-0\" (UID: \"eaa63baf-d297-4867-b87d-2c49da381d42\") " pod="openstack/kube-state-metrics-0" Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.341657 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfg97\" (UniqueName: \"kubernetes.io/projected/eaa63baf-d297-4867-b87d-2c49da381d42-kube-api-access-jfg97\") pod \"kube-state-metrics-0\" (UID: \"eaa63baf-d297-4867-b87d-2c49da381d42\") " pod="openstack/kube-state-metrics-0" Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.399343 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfg97\" (UniqueName: \"kubernetes.io/projected/eaa63baf-d297-4867-b87d-2c49da381d42-kube-api-access-jfg97\") pod \"kube-state-metrics-0\" (UID: \"eaa63baf-d297-4867-b87d-2c49da381d42\") " pod="openstack/kube-state-metrics-0" Feb 28 03:52:20 crc kubenswrapper[4624]: I0228 03:52:20.412717 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 03:52:21 crc kubenswrapper[4624]: I0228 03:52:21.154796 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"59b30a81dc689f74ba07a6866eb43af4d862d6a65c377ecf21944e761adfa908"} Feb 28 03:52:21 crc kubenswrapper[4624]: I0228 03:52:21.662508 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:52:22 crc kubenswrapper[4624]: I0228 03:52:22.200386 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eaa63baf-d297-4867-b87d-2c49da381d42","Type":"ContainerStarted","Data":"4bf706203edc7806fcd4bb9fc74e0f89c5711644b0364a8de0e917997a5b7fd3"} Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.564572 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.568550 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.577057 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.577484 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.577734 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bxlt2" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.580505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.592938 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.592948 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689022 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw88n\" (UniqueName: \"kubernetes.io/projected/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-kube-api-access-sw88n\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689130 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689172 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689199 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689279 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689311 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689335 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.689353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791687 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791751 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791784 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw88n\" (UniqueName: \"kubernetes.io/projected/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-kube-api-access-sw88n\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791829 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791855 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " 
pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791882 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791939 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.791974 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.793975 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.795159 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.796240 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.796304 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.807834 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.816432 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.817958 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw88n\" (UniqueName: \"kubernetes.io/projected/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-kube-api-access-sw88n\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.850630 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 
03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.873196 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c\") " pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:23 crc kubenswrapper[4624]: I0228 03:52:23.909570 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.580273 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-phft7"] Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.593311 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.608653 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.608838 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.608898 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7p8pv" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.616552 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-phft7"] Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.664188 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f76ww"] Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.666735 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.718911 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-run-ovn\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.718987 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-lib\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719024 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25ca2d4f-2528-442c-bfdb-7eab683203e4-scripts\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719042 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-etc-ovs\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719062 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-run\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 
28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719108 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da0269d-5fc3-487a-a49a-fa87c07af687-ovn-controller-tls-certs\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719134 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtq5s\" (UniqueName: \"kubernetes.io/projected/25ca2d4f-2528-442c-bfdb-7eab683203e4-kube-api-access-rtq5s\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719155 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-log-ovn\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.719184 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwqb\" (UniqueName: \"kubernetes.io/projected/6da0269d-5fc3-487a-a49a-fa87c07af687-kube-api-access-2rwqb\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.722588 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f76ww"] Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.727988 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6da0269d-5fc3-487a-a49a-fa87c07af687-scripts\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.728094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-run\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.728124 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da0269d-5fc3-487a-a49a-fa87c07af687-combined-ca-bundle\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.728165 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-log\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831362 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da0269d-5fc3-487a-a49a-fa87c07af687-ovn-controller-tls-certs\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831420 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtq5s\" (UniqueName: 
\"kubernetes.io/projected/25ca2d4f-2528-442c-bfdb-7eab683203e4-kube-api-access-rtq5s\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831446 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-log-ovn\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831482 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwqb\" (UniqueName: \"kubernetes.io/projected/6da0269d-5fc3-487a-a49a-fa87c07af687-kube-api-access-2rwqb\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da0269d-5fc3-487a-a49a-fa87c07af687-scripts\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831551 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-run\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da0269d-5fc3-487a-a49a-fa87c07af687-combined-ca-bundle\") pod \"ovn-controller-phft7\" (UID: 
\"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831592 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-log\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831631 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-run-ovn\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-lib\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831681 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25ca2d4f-2528-442c-bfdb-7eab683203e4-scripts\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831700 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-etc-ovs\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.831719 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-run\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.840000 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-run\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.840009 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-run\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.840276 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-etc-ovs\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.840431 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-log\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.840582 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/25ca2d4f-2528-442c-bfdb-7eab683203e4-var-lib\") pod \"ovn-controller-ovs-f76ww\" 
(UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.842153 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6da0269d-5fc3-487a-a49a-fa87c07af687-scripts\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.842374 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-run-ovn\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.846014 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6da0269d-5fc3-487a-a49a-fa87c07af687-var-log-ovn\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.847064 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25ca2d4f-2528-442c-bfdb-7eab683203e4-scripts\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.888259 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da0269d-5fc3-487a-a49a-fa87c07af687-combined-ca-bundle\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.888682 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da0269d-5fc3-487a-a49a-fa87c07af687-ovn-controller-tls-certs\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.889223 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtq5s\" (UniqueName: \"kubernetes.io/projected/25ca2d4f-2528-442c-bfdb-7eab683203e4-kube-api-access-rtq5s\") pod \"ovn-controller-ovs-f76ww\" (UID: \"25ca2d4f-2528-442c-bfdb-7eab683203e4\") " pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.900125 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwqb\" (UniqueName: \"kubernetes.io/projected/6da0269d-5fc3-487a-a49a-fa87c07af687-kube-api-access-2rwqb\") pod \"ovn-controller-phft7\" (UID: \"6da0269d-5fc3-487a-a49a-fa87c07af687\") " pod="openstack/ovn-controller-phft7" Feb 28 03:52:24 crc kubenswrapper[4624]: I0228 03:52:24.916307 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-phft7" Feb 28 03:52:25 crc kubenswrapper[4624]: I0228 03:52:25.024491 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:25 crc kubenswrapper[4624]: I0228 03:52:25.833687 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-phft7"] Feb 28 03:52:26 crc kubenswrapper[4624]: I0228 03:52:26.546574 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 03:52:26 crc kubenswrapper[4624]: W0228 03:52:26.838523 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da0269d_5fc3_487a_a49a_fa87c07af687.slice/crio-ad6b9f8504c2cbff743806d741fb94cbab66dcf022c33a47b1bdd91b79e64162 WatchSource:0}: Error finding container ad6b9f8504c2cbff743806d741fb94cbab66dcf022c33a47b1bdd91b79e64162: Status 404 returned error can't find the container with id ad6b9f8504c2cbff743806d741fb94cbab66dcf022c33a47b1bdd91b79e64162 Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.013399 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.022737 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.027024 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.030345 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.030706 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.031055 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.031554 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2l5p9" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.126982 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggzt\" (UniqueName: \"kubernetes.io/projected/1463f48e-4ada-4214-b4cf-520088ae4fe4-kube-api-access-kggzt\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130669 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1463f48e-4ada-4214-b4cf-520088ae4fe4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130765 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130815 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1463f48e-4ada-4214-b4cf-520088ae4fe4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130844 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130871 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130915 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1463f48e-4ada-4214-b4cf-520088ae4fe4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.130948 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc 
kubenswrapper[4624]: I0228 03:52:27.232587 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232661 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1463f48e-4ada-4214-b4cf-520088ae4fe4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232684 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232703 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232730 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1463f48e-4ada-4214-b4cf-520088ae4fe4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232749 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232804 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggzt\" (UniqueName: \"kubernetes.io/projected/1463f48e-4ada-4214-b4cf-520088ae4fe4-kube-api-access-kggzt\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.232914 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1463f48e-4ada-4214-b4cf-520088ae4fe4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.234138 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1463f48e-4ada-4214-b4cf-520088ae4fe4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.238807 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1463f48e-4ada-4214-b4cf-520088ae4fe4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.239804 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1463f48e-4ada-4214-b4cf-520088ae4fe4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: 
I0228 03:52:27.240666 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.251260 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.264680 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.268347 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1463f48e-4ada-4214-b4cf-520088ae4fe4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.270968 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kggzt\" (UniqueName: \"kubernetes.io/projected/1463f48e-4ada-4214-b4cf-520088ae4fe4-kube-api-access-kggzt\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.280919 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1463f48e-4ada-4214-b4cf-520088ae4fe4\") " pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.385158 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.456547 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7" event={"ID":"6da0269d-5fc3-487a-a49a-fa87c07af687","Type":"ContainerStarted","Data":"ad6b9f8504c2cbff743806d741fb94cbab66dcf022c33a47b1bdd91b79e64162"} Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.459397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c","Type":"ContainerStarted","Data":"591e0a43c175e2e8764c9ca5719dbf2a0fd0cab7f42d9206773d62d2bf53ff1a"} Feb 28 03:52:27 crc kubenswrapper[4624]: I0228 03:52:27.621195 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f76ww"] Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.494239 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f76ww" event={"ID":"25ca2d4f-2528-442c-bfdb-7eab683203e4","Type":"ContainerStarted","Data":"5d5c0b1b88c01d3c9772ed37e5fb972acba55d28d2d5d632dc06169d66cb3f2d"} Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.683333 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b9hfd"] Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.695518 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.713905 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.731675 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b9hfd"] Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.775768 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/34bc3551-9974-4754-b285-e61f586a0b18-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.775851 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcch6\" (UniqueName: \"kubernetes.io/projected/34bc3551-9974-4754-b285-e61f586a0b18-kube-api-access-gcch6\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.775887 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/34bc3551-9974-4754-b285-e61f586a0b18-ovs-rundir\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.775939 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/34bc3551-9974-4754-b285-e61f586a0b18-ovn-rundir\") pod \"ovn-controller-metrics-b9hfd\" (UID: 
\"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.775966 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34bc3551-9974-4754-b285-e61f586a0b18-combined-ca-bundle\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.776002 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bc3551-9974-4754-b285-e61f586a0b18-config\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.877609 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcch6\" (UniqueName: \"kubernetes.io/projected/34bc3551-9974-4754-b285-e61f586a0b18-kube-api-access-gcch6\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.877671 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/34bc3551-9974-4754-b285-e61f586a0b18-ovs-rundir\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.877722 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/34bc3551-9974-4754-b285-e61f586a0b18-ovn-rundir\") pod \"ovn-controller-metrics-b9hfd\" (UID: 
\"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.877751 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34bc3551-9974-4754-b285-e61f586a0b18-combined-ca-bundle\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.877780 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bc3551-9974-4754-b285-e61f586a0b18-config\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.877836 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/34bc3551-9974-4754-b285-e61f586a0b18-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.881713 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/34bc3551-9974-4754-b285-e61f586a0b18-ovn-rundir\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.882260 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/34bc3551-9974-4754-b285-e61f586a0b18-ovs-rundir\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " 
pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.885540 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34bc3551-9974-4754-b285-e61f586a0b18-config\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.910042 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34bc3551-9974-4754-b285-e61f586a0b18-combined-ca-bundle\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.910705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/34bc3551-9974-4754-b285-e61f586a0b18-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:28 crc kubenswrapper[4624]: I0228 03:52:28.916976 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcch6\" (UniqueName: \"kubernetes.io/projected/34bc3551-9974-4754-b285-e61f586a0b18-kube-api-access-gcch6\") pod \"ovn-controller-metrics-b9hfd\" (UID: \"34bc3551-9974-4754-b285-e61f586a0b18\") " pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.025628 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9hgg9"] Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.048547 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b9hfd" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.083279 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-vmvq9"] Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.084974 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.088539 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.114760 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-vmvq9"] Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.184540 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.184610 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-config\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.184722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hws\" (UniqueName: \"kubernetes.io/projected/fda63c97-ade0-4543-b58a-16d10e3e89b6-kube-api-access-b9hws\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc 
kubenswrapper[4624]: I0228 03:52:29.184761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.288631 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9hws\" (UniqueName: \"kubernetes.io/projected/fda63c97-ade0-4543-b58a-16d10e3e89b6-kube-api-access-b9hws\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.288695 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.288758 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.288793 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-config\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.290104 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-config\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.291722 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.292397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.316139 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9hws\" (UniqueName: \"kubernetes.io/projected/fda63c97-ade0-4543-b58a-16d10e3e89b6-kube-api-access-b9hws\") pod \"dnsmasq-dns-6bc7876d45-vmvq9\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:29 crc kubenswrapper[4624]: I0228 03:52:29.435389 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:41 crc kubenswrapper[4624]: E0228 03:52:41.737941 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 28 03:52:41 crc kubenswrapper[4624]: E0228 03:52:41.738958 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n84h59h66bh5b8h68dh666h5f9h5dch64h659h668h5c9h569hfdh59dh86h56bh8h57ch5b9hb4h669h54ch656hfch6dh5b4h586h54fh566h558h695q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,Sub
Path:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgpdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(81d248fe-a92f-469e-8283-3fd135198c65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:41 crc kubenswrapper[4624]: E0228 03:52:41.740180 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/memcached-0" podUID="81d248fe-a92f-469e-8283-3fd135198c65" Feb 28 03:52:42 crc kubenswrapper[4624]: E0228 03:52:42.658287 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="81d248fe-a92f-469e-8283-3fd135198c65" Feb 28 03:52:43 crc kubenswrapper[4624]: E0228 03:52:43.074929 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 28 03:52:43 crc kubenswrapper[4624]: E0228 03:52:43.075216 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rx6kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(4fc13b81-9ecc-4b66-abbd-98c7e4e1c946): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:43 crc 
kubenswrapper[4624]: E0228 03:52:43.076468 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" Feb 28 03:52:43 crc kubenswrapper[4624]: E0228 03:52:43.665949 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" Feb 28 03:52:44 crc kubenswrapper[4624]: E0228 03:52:44.955556 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 28 03:52:44 crc kubenswrapper[4624]: E0228 03:52:44.956326 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvhx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(4be4f891-f796-4d4b-b916-e669037f474a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:44 crc 
kubenswrapper[4624]: E0228 03:52:44.958018 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="4be4f891-f796-4d4b-b916-e669037f474a" Feb 28 03:52:44 crc kubenswrapper[4624]: E0228 03:52:44.965600 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 28 03:52:44 crc kubenswrapper[4624]: E0228 03:52:44.965727 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-
zqqhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(c9c8d03c-80e2-42fc-a320-8175c10a59c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:44 crc kubenswrapper[4624]: E0228 03:52:44.966966 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="c9c8d03c-80e2-42fc-a320-8175c10a59c4" Feb 28 03:52:45 crc kubenswrapper[4624]: E0228 03:52:45.069269 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 28 03:52:45 crc kubenswrapper[4624]: E0228 03:52:45.069531 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnps8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(db8c8413-e456-4f82-9947-7d37578d237f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:45 crc kubenswrapper[4624]: E0228 03:52:45.070784 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="db8c8413-e456-4f82-9947-7d37578d237f" Feb 28 03:52:45 crc kubenswrapper[4624]: E0228 03:52:45.695185 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="c9c8d03c-80e2-42fc-a320-8175c10a59c4" Feb 28 03:52:45 crc kubenswrapper[4624]: E0228 03:52:45.695739 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="db8c8413-e456-4f82-9947-7d37578d237f" Feb 28 03:52:45 crc kubenswrapper[4624]: E0228 03:52:45.695732 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="4be4f891-f796-4d4b-b916-e669037f474a" Feb 28 03:52:47 crc kubenswrapper[4624]: I0228 03:52:47.118317 4624 scope.go:117] "RemoveContainer" containerID="5f1faf0c35e070e364ff9019d56b40f992e90347a4e439f9c1a80f2036fe03ec" Feb 28 03:52:51 crc kubenswrapper[4624]: I0228 03:52:51.644848 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Feb 28 03:52:51 crc kubenswrapper[4624]: E0228 03:52:51.726286 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Feb 28 03:52:51 crc kubenswrapper[4624]: E0228 03:52:51.727338 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h675h567h5bch5bfh68ch554h5f5h78h67ch84hd7h575h686h69h5d9hch577h546h647h97h88h68fh54h584hdch5f6h584h5cfh84h564h67bq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,Sub
PathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw88n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:52 crc kubenswrapper[4624]: E0228 03:52:52.335186 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 28 03:52:52 crc kubenswrapper[4624]: E0228 03:52:52.335499 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbch55bh55h597h5c4h668h67bh646h6fh64dh597h658h695h587h59ch5bbh648h86h58bhcch54dh5d6h6fh5f5hfh54ch5f9h558h7dh697h554hdq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-ce
rts,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rwqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-phft7_openstack(6da0269d-5fc3-487a-a49a-fa87c07af687): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:52 crc kubenswrapper[4624]: E0228 03:52:52.336911 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-phft7" podUID="6da0269d-5fc3-487a-a49a-fa87c07af687" Feb 28 03:52:52 crc kubenswrapper[4624]: E0228 03:52:52.752360 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-phft7" podUID="6da0269d-5fc3-487a-a49a-fa87c07af687" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.282645 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.283323 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbzkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-c94g5_openstack(55076335-12f1-4069-ad43-3a527f4b349d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 
03:52:53.284614 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" podUID="55076335-12f1-4069-ad43-3a527f4b349d" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.292836 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.293146 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrtft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-9hgg9_openstack(974aec24-b650-4ce0-912e-a926f8cf5739): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.294880 4624 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" podUID="974aec24-b650-4ce0-912e-a926f8cf5739" Feb 28 03:52:53 crc kubenswrapper[4624]: W0228 03:52:53.379292 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1463f48e_4ada_4214_b4cf_520088ae4fe4.slice/crio-2b542e53f1872db672c0fb7c304bf190da647eca2c22f6a33fd5a9383374f1a5 WatchSource:0}: Error finding container 2b542e53f1872db672c0fb7c304bf190da647eca2c22f6a33fd5a9383374f1a5: Status 404 returned error can't find the container with id 2b542e53f1872db672c0fb7c304bf190da647eca2c22f6a33fd5a9383374f1a5 Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.392036 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.392345 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn67h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-x6fs5_openstack(93cf5f4f-9846-43ce-86b8-ee85d1a39c54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.393577 4624 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" podUID="93cf5f4f-9846-43ce-86b8-ee85d1a39c54" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.407496 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.408022 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnntl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-fljlb_openstack(8280f070-dc2c-41b0-bd3b-d67bbf8f96e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.409599 4624 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" podUID="8280f070-dc2c-41b0-bd3b-d67bbf8f96e1" Feb 28 03:52:53 crc kubenswrapper[4624]: I0228 03:52:53.753313 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b9hfd"] Feb 28 03:52:53 crc kubenswrapper[4624]: I0228 03:52:53.764629 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1463f48e-4ada-4214-b4cf-520088ae4fe4","Type":"ContainerStarted","Data":"2b542e53f1872db672c0fb7c304bf190da647eca2c22f6a33fd5a9383374f1a5"} Feb 28 03:52:53 crc kubenswrapper[4624]: E0228 03:52:53.768488 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" podUID="8280f070-dc2c-41b0-bd3b-d67bbf8f96e1" Feb 28 03:52:53 crc kubenswrapper[4624]: I0228 03:52:53.928588 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-vmvq9"] Feb 28 03:52:54 crc kubenswrapper[4624]: E0228 03:52:54.397359 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 28 03:52:54 crc kubenswrapper[4624]: E0228 03:52:54.397417 4624 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 28 03:52:54 crc kubenswrapper[4624]: E0228 03:52:54.397599 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfg97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
kube-state-metrics-0_openstack(eaa63baf-d297-4867-b87d-2c49da381d42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:52:54 crc kubenswrapper[4624]: E0228 03:52:54.399489 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="eaa63baf-d297-4867-b87d-2c49da381d42" Feb 28 03:52:54 crc kubenswrapper[4624]: W0228 03:52:54.417677 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34bc3551_9974_4754_b285_e61f586a0b18.slice/crio-c20971e50668ed506e0258f7c0b50d19b59cbfc7861f60297db024e6161e690d WatchSource:0}: Error finding container c20971e50668ed506e0258f7c0b50d19b59cbfc7861f60297db024e6161e690d: Status 404 returned error can't find the container with id c20971e50668ed506e0258f7c0b50d19b59cbfc7861f60297db024e6161e690d Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.528562 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.539847 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.561968 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbzkt\" (UniqueName: \"kubernetes.io/projected/55076335-12f1-4069-ad43-3a527f4b349d-kube-api-access-pbzkt\") pod \"55076335-12f1-4069-ad43-3a527f4b349d\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.562013 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55076335-12f1-4069-ad43-3a527f4b349d-config\") pod \"55076335-12f1-4069-ad43-3a527f4b349d\" (UID: \"55076335-12f1-4069-ad43-3a527f4b349d\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.563804 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.569997 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55076335-12f1-4069-ad43-3a527f4b349d-config" (OuterVolumeSpecName: "config") pod "55076335-12f1-4069-ad43-3a527f4b349d" (UID: "55076335-12f1-4069-ad43-3a527f4b349d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.587280 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55076335-12f1-4069-ad43-3a527f4b349d-kube-api-access-pbzkt" (OuterVolumeSpecName: "kube-api-access-pbzkt") pod "55076335-12f1-4069-ad43-3a527f4b349d" (UID: "55076335-12f1-4069-ad43-3a527f4b349d"). InnerVolumeSpecName "kube-api-access-pbzkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664006 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-config\") pod \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664099 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrtft\" (UniqueName: \"kubernetes.io/projected/974aec24-b650-4ce0-912e-a926f8cf5739-kube-api-access-zrtft\") pod \"974aec24-b650-4ce0-912e-a926f8cf5739\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664179 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-dns-svc\") pod \"974aec24-b650-4ce0-912e-a926f8cf5739\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664307 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn67h\" (UniqueName: \"kubernetes.io/projected/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-kube-api-access-xn67h\") pod \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664481 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-config\") pod \"974aec24-b650-4ce0-912e-a926f8cf5739\" (UID: \"974aec24-b650-4ce0-912e-a926f8cf5739\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664506 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-dns-svc\") pod \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\" (UID: \"93cf5f4f-9846-43ce-86b8-ee85d1a39c54\") " Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664939 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbzkt\" (UniqueName: \"kubernetes.io/projected/55076335-12f1-4069-ad43-3a527f4b349d-kube-api-access-pbzkt\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.664960 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55076335-12f1-4069-ad43-3a527f4b349d-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.665438 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "974aec24-b650-4ce0-912e-a926f8cf5739" (UID: "974aec24-b650-4ce0-912e-a926f8cf5739"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.665465 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-config" (OuterVolumeSpecName: "config") pod "974aec24-b650-4ce0-912e-a926f8cf5739" (UID: "974aec24-b650-4ce0-912e-a926f8cf5739"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.665526 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93cf5f4f-9846-43ce-86b8-ee85d1a39c54" (UID: "93cf5f4f-9846-43ce-86b8-ee85d1a39c54"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.665768 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-config" (OuterVolumeSpecName: "config") pod "93cf5f4f-9846-43ce-86b8-ee85d1a39c54" (UID: "93cf5f4f-9846-43ce-86b8-ee85d1a39c54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.674387 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-kube-api-access-xn67h" (OuterVolumeSpecName: "kube-api-access-xn67h") pod "93cf5f4f-9846-43ce-86b8-ee85d1a39c54" (UID: "93cf5f4f-9846-43ce-86b8-ee85d1a39c54"). InnerVolumeSpecName "kube-api-access-xn67h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.674443 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974aec24-b650-4ce0-912e-a926f8cf5739-kube-api-access-zrtft" (OuterVolumeSpecName: "kube-api-access-zrtft") pod "974aec24-b650-4ce0-912e-a926f8cf5739" (UID: "974aec24-b650-4ce0-912e-a926f8cf5739"). InnerVolumeSpecName "kube-api-access-zrtft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.771940 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.772548 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.772559 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.772626 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrtft\" (UniqueName: \"kubernetes.io/projected/974aec24-b650-4ce0-912e-a926f8cf5739-kube-api-access-zrtft\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.772638 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974aec24-b650-4ce0-912e-a926f8cf5739-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.772686 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn67h\" (UniqueName: \"kubernetes.io/projected/93cf5f4f-9846-43ce-86b8-ee85d1a39c54-kube-api-access-xn67h\") on node \"crc\" DevicePath \"\"" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.792345 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.792363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x6fs5" event={"ID":"93cf5f4f-9846-43ce-86b8-ee85d1a39c54","Type":"ContainerDied","Data":"8c375e70b0d517a837721767c1db9ebefff989b566d87ef79b2ea9e8170b1423"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.796575 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" event={"ID":"55076335-12f1-4069-ad43-3a527f4b349d","Type":"ContainerDied","Data":"62aed2b89094844c9a5d373a78afa4ef3257ba6202dc0470bf1070ed7b835f46"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.796606 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c94g5" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.807475 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.808151 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9hgg9" event={"ID":"974aec24-b650-4ce0-912e-a926f8cf5739","Type":"ContainerDied","Data":"2353dcbdbc8ef962a02e4b0ac4d8f352920cc9d41c9ff3dd91fd2654e76f5161"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.819315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" event={"ID":"fda63c97-ade0-4543-b58a-16d10e3e89b6","Type":"ContainerStarted","Data":"b5f4458dec5a1d8df069975fa5ca506eb974ab40c349398b6cc611e8e9453a28"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.822227 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"81d248fe-a92f-469e-8283-3fd135198c65","Type":"ContainerStarted","Data":"b38a2d974a3dc48722ddc824972de82281b0ceff584bf623c929480275b2f848"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.823917 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.827988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f76ww" event={"ID":"25ca2d4f-2528-442c-bfdb-7eab683203e4","Type":"ContainerStarted","Data":"47021bf1f8ae3d166d35b5fb7ab9d7b4bcad707de0e56c3776974c28dc7139af"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.833464 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b9hfd" event={"ID":"34bc3551-9974-4754-b285-e61f586a0b18","Type":"ContainerStarted","Data":"c20971e50668ed506e0258f7c0b50d19b59cbfc7861f60297db024e6161e690d"} Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.856013 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.360122727 
podStartE2EDuration="37.855984462s" podCreationTimestamp="2026-02-28 03:52:17 +0000 UTC" firstStartedPulling="2026-02-28 03:52:19.093306909 +0000 UTC m=+993.757346218" lastFinishedPulling="2026-02-28 03:52:54.589168644 +0000 UTC m=+1029.253207953" observedRunningTime="2026-02-28 03:52:54.854780259 +0000 UTC m=+1029.518819568" watchObservedRunningTime="2026-02-28 03:52:54.855984462 +0000 UTC m=+1029.520023771" Feb 28 03:52:54 crc kubenswrapper[4624]: E0228 03:52:54.858784 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="eaa63baf-d297-4867-b87d-2c49da381d42" Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.989513 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c94g5"] Feb 28 03:52:54 crc kubenswrapper[4624]: I0228 03:52:54.994926 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c94g5"] Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.038932 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9hgg9"] Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.052812 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9hgg9"] Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.080567 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x6fs5"] Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.087458 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x6fs5"] Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.851344 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"1463f48e-4ada-4214-b4cf-520088ae4fe4","Type":"ContainerStarted","Data":"288c81ea01eb67609add4778f989a472459dd4fbdf11199f4602d6e0aff821cb"} Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.854147 4624 generic.go:334] "Generic (PLEG): container finished" podID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerID="1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0" exitCode=0 Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.854217 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" event={"ID":"fda63c97-ade0-4543-b58a-16d10e3e89b6","Type":"ContainerDied","Data":"1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0"} Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.859276 4624 generic.go:334] "Generic (PLEG): container finished" podID="25ca2d4f-2528-442c-bfdb-7eab683203e4" containerID="47021bf1f8ae3d166d35b5fb7ab9d7b4bcad707de0e56c3776974c28dc7139af" exitCode=0 Feb 28 03:52:55 crc kubenswrapper[4624]: I0228 03:52:55.859512 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f76ww" event={"ID":"25ca2d4f-2528-442c-bfdb-7eab683203e4","Type":"ContainerDied","Data":"47021bf1f8ae3d166d35b5fb7ab9d7b4bcad707de0e56c3776974c28dc7139af"} Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.100311 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55076335-12f1-4069-ad43-3a527f4b349d" path="/var/lib/kubelet/pods/55076335-12f1-4069-ad43-3a527f4b349d/volumes" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.101688 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93cf5f4f-9846-43ce-86b8-ee85d1a39c54" path="/var/lib/kubelet/pods/93cf5f4f-9846-43ce-86b8-ee85d1a39c54/volumes" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.102147 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974aec24-b650-4ce0-912e-a926f8cf5739" 
path="/var/lib/kubelet/pods/974aec24-b650-4ce0-912e-a926f8cf5739/volumes" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.874766 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" event={"ID":"fda63c97-ade0-4543-b58a-16d10e3e89b6","Type":"ContainerStarted","Data":"960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d"} Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.875511 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.886703 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f76ww" event={"ID":"25ca2d4f-2528-442c-bfdb-7eab683203e4","Type":"ContainerStarted","Data":"7513f2145d311258fd0d502494aba0107751f39a52437275cf058ab3b6516163"} Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.887070 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.887111 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f76ww" event={"ID":"25ca2d4f-2528-442c-bfdb-7eab683203e4","Type":"ContainerStarted","Data":"f566faa81a6c9b1dfa71891a19f2ed34fb924220779722d10161421143e0189b"} Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.887128 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.908686 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" podStartSLOduration=27.450385754 podStartE2EDuration="27.908633311s" podCreationTimestamp="2026-02-28 03:52:29 +0000 UTC" firstStartedPulling="2026-02-28 03:52:54.402434592 +0000 UTC m=+1029.066473901" lastFinishedPulling="2026-02-28 03:52:54.860682149 +0000 UTC m=+1029.524721458" 
observedRunningTime="2026-02-28 03:52:56.907463969 +0000 UTC m=+1031.571503268" watchObservedRunningTime="2026-02-28 03:52:56.908633311 +0000 UTC m=+1031.572672620" Feb 28 03:52:56 crc kubenswrapper[4624]: I0228 03:52:56.938442 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f76ww" podStartSLOduration=7.222177479 podStartE2EDuration="32.938413869s" podCreationTimestamp="2026-02-28 03:52:24 +0000 UTC" firstStartedPulling="2026-02-28 03:52:27.743947546 +0000 UTC m=+1002.407986855" lastFinishedPulling="2026-02-28 03:52:53.460183936 +0000 UTC m=+1028.124223245" observedRunningTime="2026-02-28 03:52:56.936525849 +0000 UTC m=+1031.600565168" watchObservedRunningTime="2026-02-28 03:52:56.938413869 +0000 UTC m=+1031.602453188" Feb 28 03:53:00 crc kubenswrapper[4624]: I0228 03:53:00.931031 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946","Type":"ContainerStarted","Data":"95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5"} Feb 28 03:53:01 crc kubenswrapper[4624]: I0228 03:53:01.939898 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db8c8413-e456-4f82-9947-7d37578d237f","Type":"ContainerStarted","Data":"042335941a231442b67345d5da9c48c24b777ae619d93fcfab2aa6cbadc5a04e"} Feb 28 03:53:02 crc kubenswrapper[4624]: E0228 03:53:02.223753 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c" Feb 28 03:53:02 crc kubenswrapper[4624]: I0228 03:53:02.950518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"c9c8d03c-80e2-42fc-a320-8175c10a59c4","Type":"ContainerStarted","Data":"48a80edf60c2dcceaafe8b08bdd49214f66cb781a05b6fbb5ccd3c082f6f0d33"} Feb 28 03:53:02 crc kubenswrapper[4624]: I0228 03:53:02.954227 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c","Type":"ContainerStarted","Data":"8d8f029773c648b2670900c51ce448d0a6876ccb21a2d3c4984ec6824f806763"} Feb 28 03:53:02 crc kubenswrapper[4624]: I0228 03:53:02.958349 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 28 03:53:02 crc kubenswrapper[4624]: I0228 03:53:02.960476 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b9hfd" event={"ID":"34bc3551-9974-4754-b285-e61f586a0b18","Type":"ContainerStarted","Data":"3bdb1eb25e7d2d85bc8896a48469513c5896a9502ea148950b293bfca6f71d2c"} Feb 28 03:53:02 crc kubenswrapper[4624]: I0228 03:53:02.966634 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1463f48e-4ada-4214-b4cf-520088ae4fe4","Type":"ContainerStarted","Data":"282b5b5b36e1057376833ae4685eecb7091301922ef19cab674cc0381f1c0d48"} Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.010997 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=29.993599281 podStartE2EDuration="38.010977246s" podCreationTimestamp="2026-02-28 03:52:25 +0000 UTC" firstStartedPulling="2026-02-28 03:52:53.417470916 +0000 UTC m=+1028.081510225" lastFinishedPulling="2026-02-28 03:53:01.434848881 +0000 UTC m=+1036.098888190" observedRunningTime="2026-02-28 03:53:03.00485388 +0000 UTC m=+1037.668893219" watchObservedRunningTime="2026-02-28 03:53:03.010977246 +0000 UTC m=+1037.675016565" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.050281 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-metrics-b9hfd" podStartSLOduration=28.019270122000002 podStartE2EDuration="35.050255583s" podCreationTimestamp="2026-02-28 03:52:28 +0000 UTC" firstStartedPulling="2026-02-28 03:52:54.422468716 +0000 UTC m=+1029.086508025" lastFinishedPulling="2026-02-28 03:53:01.453454177 +0000 UTC m=+1036.117493486" observedRunningTime="2026-02-28 03:53:03.02956327 +0000 UTC m=+1037.693602579" watchObservedRunningTime="2026-02-28 03:53:03.050255583 +0000 UTC m=+1037.714294892" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.385675 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.457903 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.619491 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fljlb"] Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.680931 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-jtchm"] Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.682635 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.697053 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.707947 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jtchm"] Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.759153 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-dns-svc\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.759702 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.759747 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kfj\" (UniqueName: \"kubernetes.io/projected/10b86025-b1c0-4a98-a9bb-e510b2d14b66-kube-api-access-j6kfj\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.759801 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " 
pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.759828 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-config\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.861433 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.862811 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kfj\" (UniqueName: \"kubernetes.io/projected/10b86025-b1c0-4a98-a9bb-e510b2d14b66-kube-api-access-j6kfj\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.862946 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.863058 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-config\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 
03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.863160 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-dns-svc\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.862755 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.864275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-config\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.867033 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.867661 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-dns-svc\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.905609 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j6kfj\" (UniqueName: \"kubernetes.io/projected/10b86025-b1c0-4a98-a9bb-e510b2d14b66-kube-api-access-j6kfj\") pod \"dnsmasq-dns-8554648995-jtchm\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.981026 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" event={"ID":"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1","Type":"ContainerDied","Data":"b5392700e3da5e24bc0b22a7d231dce4391e25f20077ec2726cee1fca3d67163"} Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.981118 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5392700e3da5e24bc0b22a7d231dce4391e25f20077ec2726cee1fca3d67163" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.983335 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.984348 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4be4f891-f796-4d4b-b916-e669037f474a","Type":"ContainerStarted","Data":"8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0"} Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.987393 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c","Type":"ContainerStarted","Data":"c7062142c23ec07e8d6f8c82a5b2fcf293f43728b2326a145992ffd882d1bade"} Feb 28 03:53:03 crc kubenswrapper[4624]: I0228 03:53:03.987466 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.000691 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.036018 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.065774 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-config\") pod \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.065846 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-dns-svc\") pod \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.065887 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnntl\" (UniqueName: \"kubernetes.io/projected/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-kube-api-access-lnntl\") pod \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\" (UID: \"8280f070-dc2c-41b0-bd3b-d67bbf8f96e1\") " Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.066452 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-config" (OuterVolumeSpecName: "config") pod "8280f070-dc2c-41b0-bd3b-d67bbf8f96e1" (UID: "8280f070-dc2c-41b0-bd3b-d67bbf8f96e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.068199 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8280f070-dc2c-41b0-bd3b-d67bbf8f96e1" (UID: "8280f070-dc2c-41b0-bd3b-d67bbf8f96e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.073414 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-kube-api-access-lnntl" (OuterVolumeSpecName: "kube-api-access-lnntl") pod "8280f070-dc2c-41b0-bd3b-d67bbf8f96e1" (UID: "8280f070-dc2c-41b0-bd3b-d67bbf8f96e1"). InnerVolumeSpecName "kube-api-access-lnntl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.078919 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.48788657 podStartE2EDuration="42.078900655s" podCreationTimestamp="2026-02-28 03:52:22 +0000 UTC" firstStartedPulling="2026-02-28 03:52:26.843712592 +0000 UTC m=+1001.507751901" lastFinishedPulling="2026-02-28 03:53:03.434726677 +0000 UTC m=+1038.098765986" observedRunningTime="2026-02-28 03:53:04.078237477 +0000 UTC m=+1038.742276786" watchObservedRunningTime="2026-02-28 03:53:04.078900655 +0000 UTC m=+1038.742939964" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.170673 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.171027 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnntl\" (UniqueName: 
\"kubernetes.io/projected/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-kube-api-access-lnntl\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.171039 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.437989 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.587012 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jtchm"] Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.995720 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fljlb" Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.997741 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jtchm" event={"ID":"10b86025-b1c0-4a98-a9bb-e510b2d14b66","Type":"ContainerStarted","Data":"6ef60551712aa7da9b2e56bdd6a5ff14d084be850bc0cfb065658a9d74051d23"} Feb 28 03:53:04 crc kubenswrapper[4624]: I0228 03:53:04.997805 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jtchm" event={"ID":"10b86025-b1c0-4a98-a9bb-e510b2d14b66","Type":"ContainerStarted","Data":"bd676611a1805fe88219683aa1f0e8d011c44c9225fd9d2d3e342ffd06623694"} Feb 28 03:53:05 crc kubenswrapper[4624]: I0228 03:53:05.097947 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fljlb"] Feb 28 03:53:05 crc kubenswrapper[4624]: I0228 03:53:05.104966 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fljlb"] Feb 28 03:53:05 crc kubenswrapper[4624]: I0228 03:53:05.910077 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 28 03:53:06 crc kubenswrapper[4624]: I0228 03:53:06.005863 4624 generic.go:334] "Generic (PLEG): container finished" podID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerID="6ef60551712aa7da9b2e56bdd6a5ff14d084be850bc0cfb065658a9d74051d23" exitCode=0 Feb 28 03:53:06 crc kubenswrapper[4624]: I0228 03:53:06.005923 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jtchm" event={"ID":"10b86025-b1c0-4a98-a9bb-e510b2d14b66","Type":"ContainerDied","Data":"6ef60551712aa7da9b2e56bdd6a5ff14d084be850bc0cfb065658a9d74051d23"} Feb 28 03:53:06 crc kubenswrapper[4624]: I0228 03:53:06.104403 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8280f070-dc2c-41b0-bd3b-d67bbf8f96e1" path="/var/lib/kubelet/pods/8280f070-dc2c-41b0-bd3b-d67bbf8f96e1/volumes" Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.016424 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jtchm" event={"ID":"10b86025-b1c0-4a98-a9bb-e510b2d14b66","Type":"ContainerStarted","Data":"ca313d442ce79b4fe757e4ddc26dd2cd169dbeadedcf59adb7808c14b6e30c30"} Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.016858 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.028623 4624 generic.go:334] "Generic (PLEG): container finished" podID="db8c8413-e456-4f82-9947-7d37578d237f" containerID="042335941a231442b67345d5da9c48c24b777ae619d93fcfab2aa6cbadc5a04e" exitCode=0 Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.028728 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db8c8413-e456-4f82-9947-7d37578d237f","Type":"ContainerDied","Data":"042335941a231442b67345d5da9c48c24b777ae619d93fcfab2aa6cbadc5a04e"} Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.045453 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7" event={"ID":"6da0269d-5fc3-487a-a49a-fa87c07af687","Type":"ContainerStarted","Data":"d529725f60ff459f9f187706a4000320affb54049d3c9e5e4c7df49b4e17df8a"} Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.046592 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-phft7" Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.051582 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-jtchm" podStartSLOduration=4.051561687 podStartE2EDuration="4.051561687s" podCreationTimestamp="2026-02-28 03:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:07.045015767 +0000 UTC m=+1041.709055066" watchObservedRunningTime="2026-02-28 03:53:07.051561687 +0000 UTC m=+1041.715601006" Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.053013 4624 generic.go:334] "Generic (PLEG): container finished" podID="c9c8d03c-80e2-42fc-a320-8175c10a59c4" containerID="48a80edf60c2dcceaafe8b08bdd49214f66cb781a05b6fbb5ccd3c082f6f0d33" exitCode=0 Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.053049 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9c8d03c-80e2-42fc-a320-8175c10a59c4","Type":"ContainerDied","Data":"48a80edf60c2dcceaafe8b08bdd49214f66cb781a05b6fbb5ccd3c082f6f0d33"} Feb 28 03:53:07 crc kubenswrapper[4624]: I0228 03:53:07.130709 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-phft7" podStartSLOduration=3.41175214 podStartE2EDuration="43.130687434s" podCreationTimestamp="2026-02-28 03:52:24 +0000 UTC" firstStartedPulling="2026-02-28 03:52:26.874457057 +0000 UTC m=+1001.538496366" lastFinishedPulling="2026-02-28 03:53:06.593392351 +0000 UTC m=+1041.257431660" 
observedRunningTime="2026-02-28 03:53:07.100849276 +0000 UTC m=+1041.764888595" watchObservedRunningTime="2026-02-28 03:53:07.130687434 +0000 UTC m=+1041.794726743" Feb 28 03:53:08 crc kubenswrapper[4624]: I0228 03:53:08.063854 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9c8d03c-80e2-42fc-a320-8175c10a59c4","Type":"ContainerStarted","Data":"f6bba6afb4418a80d4bcc7a7f1ceab07807bd5030608fdb0c31d1f5bfa3ce5b2"} Feb 28 03:53:08 crc kubenswrapper[4624]: I0228 03:53:08.068350 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db8c8413-e456-4f82-9947-7d37578d237f","Type":"ContainerStarted","Data":"c2a3d7186d5abef9fe0e5295f7c09b9f4dea0c88b9df1d5d687729bfaaffbfbb"} Feb 28 03:53:08 crc kubenswrapper[4624]: I0228 03:53:08.090784 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.027732712 podStartE2EDuration="53.090756333s" podCreationTimestamp="2026-02-28 03:52:15 +0000 UTC" firstStartedPulling="2026-02-28 03:52:18.385630335 +0000 UTC m=+993.049669644" lastFinishedPulling="2026-02-28 03:53:01.448653956 +0000 UTC m=+1036.112693265" observedRunningTime="2026-02-28 03:53:08.08954112 +0000 UTC m=+1042.753580469" watchObservedRunningTime="2026-02-28 03:53:08.090756333 +0000 UTC m=+1042.754795642" Feb 28 03:53:08 crc kubenswrapper[4624]: I0228 03:53:08.133270 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.512239863 podStartE2EDuration="52.133237927s" podCreationTimestamp="2026-02-28 03:52:16 +0000 UTC" firstStartedPulling="2026-02-28 03:52:18.808218174 +0000 UTC m=+993.472257483" lastFinishedPulling="2026-02-28 03:53:01.429216238 +0000 UTC m=+1036.093255547" observedRunningTime="2026-02-28 03:53:08.129061022 +0000 UTC m=+1042.793100341" watchObservedRunningTime="2026-02-28 03:53:08.133237927 +0000 UTC 
m=+1042.797277236" Feb 28 03:53:08 crc kubenswrapper[4624]: I0228 03:53:08.910740 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 28 03:53:08 crc kubenswrapper[4624]: I0228 03:53:08.951745 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.090927 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eaa63baf-d297-4867-b87d-2c49da381d42","Type":"ContainerStarted","Data":"7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3"} Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.092673 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.115251 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.2910411 podStartE2EDuration="49.115221656s" podCreationTimestamp="2026-02-28 03:52:20 +0000 UTC" firstStartedPulling="2026-02-28 03:52:21.706562255 +0000 UTC m=+996.370601564" lastFinishedPulling="2026-02-28 03:53:08.530742771 +0000 UTC m=+1043.194782120" observedRunningTime="2026-02-28 03:53:09.11059602 +0000 UTC m=+1043.774635329" watchObservedRunningTime="2026-02-28 03:53:09.115221656 +0000 UTC m=+1043.779260965" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.138173 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.335679 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.337021 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.339725 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.340201 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.340363 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mn8vt" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.340929 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.367502 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439515 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjt8h\" (UniqueName: \"kubernetes.io/projected/5e11975f-5910-43a1-91ed-2633d3576fce-kube-api-access-fjt8h\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439641 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439679 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439723 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e11975f-5910-43a1-91ed-2633d3576fce-config\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439749 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439812 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e11975f-5910-43a1-91ed-2633d3576fce-scripts\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.439855 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e11975f-5910-43a1-91ed-2633d3576fce-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.541808 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e11975f-5910-43a1-91ed-2633d3576fce-scripts\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.541861 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e11975f-5910-43a1-91ed-2633d3576fce-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.541907 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjt8h\" (UniqueName: \"kubernetes.io/projected/5e11975f-5910-43a1-91ed-2633d3576fce-kube-api-access-fjt8h\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.541972 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.542216 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.542577 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e11975f-5910-43a1-91ed-2633d3576fce-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.543072 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e11975f-5910-43a1-91ed-2633d3576fce-scripts\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") 
" pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.543456 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e11975f-5910-43a1-91ed-2633d3576fce-config\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.543518 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.544853 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e11975f-5910-43a1-91ed-2633d3576fce-config\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.550635 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.550824 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.566868 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5e11975f-5910-43a1-91ed-2633d3576fce-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.567481 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjt8h\" (UniqueName: \"kubernetes.io/projected/5e11975f-5910-43a1-91ed-2633d3576fce-kube-api-access-fjt8h\") pod \"ovn-northd-0\" (UID: \"5e11975f-5910-43a1-91ed-2633d3576fce\") " pod="openstack/ovn-northd-0" Feb 28 03:53:09 crc kubenswrapper[4624]: I0228 03:53:09.659390 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.144425 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 03:53:10 crc kubenswrapper[4624]: W0228 03:53:10.154935 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e11975f_5910_43a1_91ed_2633d3576fce.slice/crio-3649623b2e4937c1e7366c91fc571bbcfd7438a54cda0299ee9f4903d4bc94b7 WatchSource:0}: Error finding container 3649623b2e4937c1e7366c91fc571bbcfd7438a54cda0299ee9f4903d4bc94b7: Status 404 returned error can't find the container with id 3649623b2e4937c1e7366c91fc571bbcfd7438a54cda0299ee9f4903d4bc94b7 Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.654225 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jtchm"] Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.654999 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-jtchm" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerName="dnsmasq-dns" containerID="cri-o://ca313d442ce79b4fe757e4ddc26dd2cd169dbeadedcf59adb7808c14b6e30c30" gracePeriod=10 Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.686094 4624 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lnsfp"] Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.693038 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.706602 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lnsfp"] Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.776245 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.776302 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp49d\" (UniqueName: \"kubernetes.io/projected/33a20faf-8a53-44bb-880e-09eec7ab14b7-kube-api-access-bp49d\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.776344 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-config\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.776392 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") 
" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.776435 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.877925 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.879222 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.879293 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp49d\" (UniqueName: \"kubernetes.io/projected/33a20faf-8a53-44bb-880e-09eec7ab14b7-kube-api-access-bp49d\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.879352 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-config\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc 
kubenswrapper[4624]: I0228 03:53:10.879412 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.879490 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.880220 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.881166 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-config\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.881244 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:10 crc kubenswrapper[4624]: I0228 03:53:10.902761 4624 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-bp49d\" (UniqueName: \"kubernetes.io/projected/33a20faf-8a53-44bb-880e-09eec7ab14b7-kube-api-access-bp49d\") pod \"dnsmasq-dns-b8fbc5445-lnsfp\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.103516 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.122137 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jtchm" event={"ID":"10b86025-b1c0-4a98-a9bb-e510b2d14b66","Type":"ContainerDied","Data":"ca313d442ce79b4fe757e4ddc26dd2cd169dbeadedcf59adb7808c14b6e30c30"} Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.122074 4624 generic.go:334] "Generic (PLEG): container finished" podID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerID="ca313d442ce79b4fe757e4ddc26dd2cd169dbeadedcf59adb7808c14b6e30c30" exitCode=0 Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.124537 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5e11975f-5910-43a1-91ed-2633d3576fce","Type":"ContainerStarted","Data":"3649623b2e4937c1e7366c91fc571bbcfd7438a54cda0299ee9f4903d4bc94b7"} Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.231337 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.290227 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-dns-svc\") pod \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.290515 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-config\") pod \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.290548 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kfj\" (UniqueName: \"kubernetes.io/projected/10b86025-b1c0-4a98-a9bb-e510b2d14b66-kube-api-access-j6kfj\") pod \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.290596 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-nb\") pod \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.290631 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-sb\") pod \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\" (UID: \"10b86025-b1c0-4a98-a9bb-e510b2d14b66\") " Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.340193 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/10b86025-b1c0-4a98-a9bb-e510b2d14b66-kube-api-access-j6kfj" (OuterVolumeSpecName: "kube-api-access-j6kfj") pod "10b86025-b1c0-4a98-a9bb-e510b2d14b66" (UID: "10b86025-b1c0-4a98-a9bb-e510b2d14b66"). InnerVolumeSpecName "kube-api-access-j6kfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.368326 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-config" (OuterVolumeSpecName: "config") pod "10b86025-b1c0-4a98-a9bb-e510b2d14b66" (UID: "10b86025-b1c0-4a98-a9bb-e510b2d14b66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.394033 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.394104 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6kfj\" (UniqueName: \"kubernetes.io/projected/10b86025-b1c0-4a98-a9bb-e510b2d14b66-kube-api-access-j6kfj\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.400818 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "10b86025-b1c0-4a98-a9bb-e510b2d14b66" (UID: "10b86025-b1c0-4a98-a9bb-e510b2d14b66"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.437509 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "10b86025-b1c0-4a98-a9bb-e510b2d14b66" (UID: "10b86025-b1c0-4a98-a9bb-e510b2d14b66"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.459778 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "10b86025-b1c0-4a98-a9bb-e510b2d14b66" (UID: "10b86025-b1c0-4a98-a9bb-e510b2d14b66"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.497264 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.497302 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.497313 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10b86025-b1c0-4a98-a9bb-e510b2d14b66-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.589586 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lnsfp"] Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.830001 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-storage-0"] Feb 28 03:53:11 crc kubenswrapper[4624]: E0228 03:53:11.830697 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerName="dnsmasq-dns" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.830715 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerName="dnsmasq-dns" Feb 28 03:53:11 crc kubenswrapper[4624]: E0228 03:53:11.830730 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerName="init" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.830739 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerName="init" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.830887 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" containerName="dnsmasq-dns" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.842556 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.856571 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.856734 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.861631 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dt89l" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.861770 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.895921 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.903864 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf446c-fcb0-4f4a-af81-0f64d52669e8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.904033 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.904173 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08cf446c-fcb0-4f4a-af81-0f64d52669e8-cache\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:11 
crc kubenswrapper[4624]: I0228 03:53:11.904320 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.904459 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7qf\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-kube-api-access-rd7qf\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:11 crc kubenswrapper[4624]: I0228 03:53:11.904641 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/08cf446c-fcb0-4f4a-af81-0f64d52669e8-lock\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006296 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08cf446c-fcb0-4f4a-af81-0f64d52669e8-cache\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006336 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006366 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7qf\" (UniqueName: 
\"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-kube-api-access-rd7qf\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006401 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/08cf446c-fcb0-4f4a-af81-0f64d52669e8-lock\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006460 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf446c-fcb0-4f4a-af81-0f64d52669e8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006478 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.006921 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: E0228 03:53:12.007421 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 03:53:12 crc kubenswrapper[4624]: E0228 03:53:12.007450 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" 
not found Feb 28 03:53:12 crc kubenswrapper[4624]: E0228 03:53:12.007513 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift podName:08cf446c-fcb0-4f4a-af81-0f64d52669e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:53:12.507487326 +0000 UTC m=+1047.171526635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift") pod "swift-storage-0" (UID: "08cf446c-fcb0-4f4a-af81-0f64d52669e8") : configmap "swift-ring-files" not found Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.008237 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08cf446c-fcb0-4f4a-af81-0f64d52669e8-cache\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.008460 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/08cf446c-fcb0-4f4a-af81-0f64d52669e8-lock\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.013580 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf446c-fcb0-4f4a-af81-0f64d52669e8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.025832 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7qf\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-kube-api-access-rd7qf\") pod \"swift-storage-0\" (UID: 
\"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.042556 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.140385 4624 generic.go:334] "Generic (PLEG): container finished" podID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerID="1a2057b2090928112581e495256f6c5c9ef1b6a0883efc0f3ac32e5f1628785e" exitCode=0 Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.140506 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" event={"ID":"33a20faf-8a53-44bb-880e-09eec7ab14b7","Type":"ContainerDied","Data":"1a2057b2090928112581e495256f6c5c9ef1b6a0883efc0f3ac32e5f1628785e"} Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.140903 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" event={"ID":"33a20faf-8a53-44bb-880e-09eec7ab14b7","Type":"ContainerStarted","Data":"39dce93ae743644359c092be1d71e4e755bcc00ea8bd4fd38528448c976830c4"} Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.143486 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jtchm" event={"ID":"10b86025-b1c0-4a98-a9bb-e510b2d14b66","Type":"ContainerDied","Data":"bd676611a1805fe88219683aa1f0e8d011c44c9225fd9d2d3e342ffd06623694"} Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.143588 4624 scope.go:117] "RemoveContainer" containerID="ca313d442ce79b4fe757e4ddc26dd2cd169dbeadedcf59adb7808c14b6e30c30" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.143554 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jtchm" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.189239 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jtchm"] Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.196393 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jtchm"] Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.396690 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gfd7z"] Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.407030 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.409593 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.411985 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gfd7z"] Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.412420 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.412679 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518376 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-etc-swift\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518461 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-swiftconf\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518484 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-dispersionconf\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518510 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-scripts\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518534 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-ring-data-devices\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518689 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqlr\" (UniqueName: \"kubernetes.io/projected/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-kube-api-access-pcqlr\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.518814 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.519203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-combined-ca-bundle\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: E0228 03:53:12.519235 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 03:53:12 crc kubenswrapper[4624]: E0228 03:53:12.519540 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 03:53:12 crc kubenswrapper[4624]: E0228 03:53:12.519671 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift podName:08cf446c-fcb0-4f4a-af81-0f64d52669e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:53:13.51964283 +0000 UTC m=+1048.183682139 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift") pod "swift-storage-0" (UID: "08cf446c-fcb0-4f4a-af81-0f64d52669e8") : configmap "swift-ring-files" not found Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.599864 4624 scope.go:117] "RemoveContainer" containerID="6ef60551712aa7da9b2e56bdd6a5ff14d084be850bc0cfb065658a9d74051d23" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.621104 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-combined-ca-bundle\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.621167 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-etc-swift\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.621233 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-swiftconf\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.621263 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-dispersionconf\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 
03:53:12.621292 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-scripts\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.621346 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-ring-data-devices\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.621376 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqlr\" (UniqueName: \"kubernetes.io/projected/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-kube-api-access-pcqlr\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.623176 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-etc-swift\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.623736 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-scripts\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.624723 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-ring-data-devices\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.626599 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-swiftconf\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.627181 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-combined-ca-bundle\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.627428 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-dispersionconf\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.640608 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqlr\" (UniqueName: \"kubernetes.io/projected/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-kube-api-access-pcqlr\") pod \"swift-ring-rebalance-gfd7z\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:12 crc kubenswrapper[4624]: I0228 03:53:12.755354 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:13 crc kubenswrapper[4624]: I0228 03:53:13.187104 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gfd7z"] Feb 28 03:53:13 crc kubenswrapper[4624]: I0228 03:53:13.202327 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5e11975f-5910-43a1-91ed-2633d3576fce","Type":"ContainerStarted","Data":"df451117c784187945c612fee916f8793dd5958fda57ea90a4aea4d7347c6cd3"} Feb 28 03:53:13 crc kubenswrapper[4624]: I0228 03:53:13.205345 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" event={"ID":"33a20faf-8a53-44bb-880e-09eec7ab14b7","Type":"ContainerStarted","Data":"956dd0d948d1393de5e82cbbe3d1fec7b8db9b6929e07b8b407a4fc2afb034be"} Feb 28 03:53:13 crc kubenswrapper[4624]: I0228 03:53:13.206217 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:13 crc kubenswrapper[4624]: I0228 03:53:13.232485 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" podStartSLOduration=3.232460159 podStartE2EDuration="3.232460159s" podCreationTimestamp="2026-02-28 03:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:13.225827748 +0000 UTC m=+1047.889867057" watchObservedRunningTime="2026-02-28 03:53:13.232460159 +0000 UTC m=+1047.896499468" Feb 28 03:53:13 crc kubenswrapper[4624]: I0228 03:53:13.587471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:13 crc kubenswrapper[4624]: E0228 
03:53:13.587713 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 03:53:13 crc kubenswrapper[4624]: E0228 03:53:13.587734 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 03:53:13 crc kubenswrapper[4624]: E0228 03:53:13.587803 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift podName:08cf446c-fcb0-4f4a-af81-0f64d52669e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:53:15.587781459 +0000 UTC m=+1050.251820768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift") pod "swift-storage-0" (UID: "08cf446c-fcb0-4f4a-af81-0f64d52669e8") : configmap "swift-ring-files" not found Feb 28 03:53:14 crc kubenswrapper[4624]: I0228 03:53:14.105526 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b86025-b1c0-4a98-a9bb-e510b2d14b66" path="/var/lib/kubelet/pods/10b86025-b1c0-4a98-a9bb-e510b2d14b66/volumes" Feb 28 03:53:14 crc kubenswrapper[4624]: I0228 03:53:14.218640 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5e11975f-5910-43a1-91ed-2633d3576fce","Type":"ContainerStarted","Data":"45f33341df448f2c1a78013ee111a05157f6d0eee35893e541d5678848e1ed2a"} Feb 28 03:53:14 crc kubenswrapper[4624]: I0228 03:53:14.218781 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 28 03:53:14 crc kubenswrapper[4624]: I0228 03:53:14.225169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gfd7z" event={"ID":"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33","Type":"ContainerStarted","Data":"b995b293b48517fc2c24da66892b842ad17203b3b9f59ddc41e58c64d5e8a5dc"} Feb 28 
03:53:14 crc kubenswrapper[4624]: I0228 03:53:14.252333 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.7462641740000002 podStartE2EDuration="5.252305047s" podCreationTimestamp="2026-02-28 03:53:09 +0000 UTC" firstStartedPulling="2026-02-28 03:53:10.15906318 +0000 UTC m=+1044.823102489" lastFinishedPulling="2026-02-28 03:53:12.665104043 +0000 UTC m=+1047.329143362" observedRunningTime="2026-02-28 03:53:14.246452507 +0000 UTC m=+1048.910491916" watchObservedRunningTime="2026-02-28 03:53:14.252305047 +0000 UTC m=+1048.916344366" Feb 28 03:53:15 crc kubenswrapper[4624]: I0228 03:53:15.636766 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:15 crc kubenswrapper[4624]: E0228 03:53:15.637483 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 03:53:15 crc kubenswrapper[4624]: E0228 03:53:15.637504 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 03:53:15 crc kubenswrapper[4624]: E0228 03:53:15.637574 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift podName:08cf446c-fcb0-4f4a-af81-0f64d52669e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:53:19.637549919 +0000 UTC m=+1054.301589238 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift") pod "swift-storage-0" (UID: "08cf446c-fcb0-4f4a-af81-0f64d52669e8") : configmap "swift-ring-files" not found Feb 28 03:53:16 crc kubenswrapper[4624]: I0228 03:53:16.852839 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 28 03:53:16 crc kubenswrapper[4624]: I0228 03:53:16.852917 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 28 03:53:17 crc kubenswrapper[4624]: I0228 03:53:17.057136 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 28 03:53:17 crc kubenswrapper[4624]: I0228 03:53:17.353607 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 28 03:53:17 crc kubenswrapper[4624]: I0228 03:53:17.505128 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 28 03:53:17 crc kubenswrapper[4624]: I0228 03:53:17.505201 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 28 03:53:17 crc kubenswrapper[4624]: I0228 03:53:17.594051 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.263605 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gfd7z" event={"ID":"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33","Type":"ContainerStarted","Data":"813255ae47c9bdd6c17541924db968d3c0f9bbf98e6085aad31eb05daaa9a31d"} Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.296779 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gfd7z" podStartSLOduration=2.397296586 
podStartE2EDuration="6.296752726s" podCreationTimestamp="2026-02-28 03:53:12 +0000 UTC" firstStartedPulling="2026-02-28 03:53:13.249687471 +0000 UTC m=+1047.913726780" lastFinishedPulling="2026-02-28 03:53:17.149143611 +0000 UTC m=+1051.813182920" observedRunningTime="2026-02-28 03:53:18.292206321 +0000 UTC m=+1052.956245630" watchObservedRunningTime="2026-02-28 03:53:18.296752726 +0000 UTC m=+1052.960792075" Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.369422 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.914363 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-skbgt"] Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.915995 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.946478 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-skbgt"] Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.957962 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d1b-account-create-update-2v8tl"] Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.959355 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.965681 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 28 03:53:18 crc kubenswrapper[4624]: I0228 03:53:18.986063 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d1b-account-create-update-2v8tl"] Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.014862 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5295r\" (UniqueName: \"kubernetes.io/projected/dd94e772-912b-4331-8658-184ae20ef60b-kube-api-access-5295r\") pod \"keystone-db-create-skbgt\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.014956 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85bmt\" (UniqueName: \"kubernetes.io/projected/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-kube-api-access-85bmt\") pod \"keystone-7d1b-account-create-update-2v8tl\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.015314 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd94e772-912b-4331-8658-184ae20ef60b-operator-scripts\") pod \"keystone-db-create-skbgt\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.015373 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-operator-scripts\") pod 
\"keystone-7d1b-account-create-update-2v8tl\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.112608 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-m48t5"] Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.114158 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.117076 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5295r\" (UniqueName: \"kubernetes.io/projected/dd94e772-912b-4331-8658-184ae20ef60b-kube-api-access-5295r\") pod \"keystone-db-create-skbgt\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.117163 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85bmt\" (UniqueName: \"kubernetes.io/projected/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-kube-api-access-85bmt\") pod \"keystone-7d1b-account-create-update-2v8tl\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.117221 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd94e772-912b-4331-8658-184ae20ef60b-operator-scripts\") pod \"keystone-db-create-skbgt\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.117242 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-operator-scripts\") pod 
\"keystone-7d1b-account-create-update-2v8tl\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.118163 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd94e772-912b-4331-8658-184ae20ef60b-operator-scripts\") pod \"keystone-db-create-skbgt\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.118832 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-operator-scripts\") pod \"keystone-7d1b-account-create-update-2v8tl\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.142926 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5295r\" (UniqueName: \"kubernetes.io/projected/dd94e772-912b-4331-8658-184ae20ef60b-kube-api-access-5295r\") pod \"keystone-db-create-skbgt\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.146422 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85bmt\" (UniqueName: \"kubernetes.io/projected/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-kube-api-access-85bmt\") pod \"keystone-7d1b-account-create-update-2v8tl\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.201653 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-m48t5"] Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.220179 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcnfl\" (UniqueName: \"kubernetes.io/projected/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-kube-api-access-hcnfl\") pod \"placement-db-create-m48t5\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.220310 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-operator-scripts\") pod \"placement-db-create-m48t5\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.247286 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.250235 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-865f-account-create-update-qf8xq"] Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.251928 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.256100 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.257200 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-865f-account-create-update-qf8xq"] Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.291493 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.321904 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-operator-scripts\") pod \"placement-db-create-m48t5\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.322557 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcnfl\" (UniqueName: \"kubernetes.io/projected/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-kube-api-access-hcnfl\") pod \"placement-db-create-m48t5\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.322605 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cpd\" (UniqueName: \"kubernetes.io/projected/c120112d-dd0a-45a5-9cc8-90829cd3b434-kube-api-access-j8cpd\") pod \"placement-865f-account-create-update-qf8xq\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.322703 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c120112d-dd0a-45a5-9cc8-90829cd3b434-operator-scripts\") pod \"placement-865f-account-create-update-qf8xq\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.325393 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-operator-scripts\") pod \"placement-db-create-m48t5\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.343343 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcnfl\" (UniqueName: \"kubernetes.io/projected/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-kube-api-access-hcnfl\") pod \"placement-db-create-m48t5\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.425235 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8cpd\" (UniqueName: \"kubernetes.io/projected/c120112d-dd0a-45a5-9cc8-90829cd3b434-kube-api-access-j8cpd\") pod \"placement-865f-account-create-update-qf8xq\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.425653 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c120112d-dd0a-45a5-9cc8-90829cd3b434-operator-scripts\") pod \"placement-865f-account-create-update-qf8xq\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.427248 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c120112d-dd0a-45a5-9cc8-90829cd3b434-operator-scripts\") pod \"placement-865f-account-create-update-qf8xq\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.446693 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-m48t5" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.452731 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8cpd\" (UniqueName: \"kubernetes.io/projected/c120112d-dd0a-45a5-9cc8-90829cd3b434-kube-api-access-j8cpd\") pod \"placement-865f-account-create-update-qf8xq\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.655448 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:19 crc kubenswrapper[4624]: E0228 03:53:19.732017 4624 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 03:53:19 crc kubenswrapper[4624]: E0228 03:53:19.732055 4624 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 03:53:19 crc kubenswrapper[4624]: E0228 03:53:19.732144 4624 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift podName:08cf446c-fcb0-4f4a-af81-0f64d52669e8 nodeName:}" failed. No retries permitted until 2026-02-28 03:53:27.732119141 +0000 UTC m=+1062.396158450 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift") pod "swift-storage-0" (UID: "08cf446c-fcb0-4f4a-af81-0f64d52669e8") : configmap "swift-ring-files" not found Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.731832 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.763593 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-skbgt"] Feb 28 03:53:19 crc kubenswrapper[4624]: W0228 03:53:19.825447 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd94e772_912b_4331_8658_184ae20ef60b.slice/crio-a35170f3f4a93fc572af3eff82002adc7266c49d1c90874f35f60b33c4cf9e57 WatchSource:0}: Error finding container a35170f3f4a93fc572af3eff82002adc7266c49d1c90874f35f60b33c4cf9e57: Status 404 returned error can't find the container with id a35170f3f4a93fc572af3eff82002adc7266c49d1c90874f35f60b33c4cf9e57 Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.874848 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d1b-account-create-update-2v8tl"] Feb 28 03:53:19 crc kubenswrapper[4624]: W0228 03:53:19.900066 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1921f1f5_2dcd_4574_b20c_c2a3c4b55cf3.slice/crio-61d9037411fd9c962bc03158dc4dc60a5dc0b63367136c9f280f0f163d1f0531 WatchSource:0}: Error finding container 61d9037411fd9c962bc03158dc4dc60a5dc0b63367136c9f280f0f163d1f0531: Status 404 returned error can't find the container with id 
61d9037411fd9c962bc03158dc4dc60a5dc0b63367136c9f280f0f163d1f0531 Feb 28 03:53:19 crc kubenswrapper[4624]: I0228 03:53:19.947217 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-m48t5"] Feb 28 03:53:19 crc kubenswrapper[4624]: W0228 03:53:19.968973 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f97bebd_cc2b_4587_89d3_6f7c0f463c57.slice/crio-92c214a33b67a71c9e568aa51cfa7aeda9c45f227f0d14df7c9370b252f77f36 WatchSource:0}: Error finding container 92c214a33b67a71c9e568aa51cfa7aeda9c45f227f0d14df7c9370b252f77f36: Status 404 returned error can't find the container with id 92c214a33b67a71c9e568aa51cfa7aeda9c45f227f0d14df7c9370b252f77f36 Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.280192 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-865f-account-create-update-qf8xq"] Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.288624 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-skbgt" event={"ID":"dd94e772-912b-4331-8658-184ae20ef60b","Type":"ContainerStarted","Data":"ed0a55553f3c7cdce2f0c3a52db41adef65adf38737cf21e64c38d8bcdd1f212"} Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.288708 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-skbgt" event={"ID":"dd94e772-912b-4331-8658-184ae20ef60b","Type":"ContainerStarted","Data":"a35170f3f4a93fc572af3eff82002adc7266c49d1c90874f35f60b33c4cf9e57"} Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.289974 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m48t5" event={"ID":"1f97bebd-cc2b-4587-89d3-6f7c0f463c57","Type":"ContainerStarted","Data":"3446d5d34d71b6cfb9e5e2bfff7b699988b67697ac423f95a32a2d008ee10658"} Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.290025 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-m48t5" event={"ID":"1f97bebd-cc2b-4587-89d3-6f7c0f463c57","Type":"ContainerStarted","Data":"92c214a33b67a71c9e568aa51cfa7aeda9c45f227f0d14df7c9370b252f77f36"} Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.296342 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d1b-account-create-update-2v8tl" event={"ID":"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3","Type":"ContainerStarted","Data":"780a6a671c940e1f81fb13d333cfb796167a09f1c8228c2ced8dfd4747c9f664"} Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.296369 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d1b-account-create-update-2v8tl" event={"ID":"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3","Type":"ContainerStarted","Data":"61d9037411fd9c962bc03158dc4dc60a5dc0b63367136c9f280f0f163d1f0531"} Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.326828 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-skbgt" podStartSLOduration=2.326807165 podStartE2EDuration="2.326807165s" podCreationTimestamp="2026-02-28 03:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:20.316054331 +0000 UTC m=+1054.980093640" watchObservedRunningTime="2026-02-28 03:53:20.326807165 +0000 UTC m=+1054.990846474" Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.392632 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-m48t5" podStartSLOduration=1.392607726 podStartE2EDuration="1.392607726s" podCreationTimestamp="2026-02-28 03:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:20.389148052 +0000 UTC m=+1055.053187361" watchObservedRunningTime="2026-02-28 03:53:20.392607726 +0000 UTC m=+1055.056647035" Feb 28 03:53:20 
crc kubenswrapper[4624]: I0228 03:53:20.395334 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d1b-account-create-update-2v8tl" podStartSLOduration=2.3953274909999998 podStartE2EDuration="2.395327491s" podCreationTimestamp="2026-02-28 03:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:20.366504243 +0000 UTC m=+1055.030543552" watchObservedRunningTime="2026-02-28 03:53:20.395327491 +0000 UTC m=+1055.059366800" Feb 28 03:53:20 crc kubenswrapper[4624]: I0228 03:53:20.434330 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.106529 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.173699 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-vmvq9"] Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.174007 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerName="dnsmasq-dns" containerID="cri-o://960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d" gracePeriod=10 Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.314416 4624 generic.go:334] "Generic (PLEG): container finished" podID="1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" containerID="780a6a671c940e1f81fb13d333cfb796167a09f1c8228c2ced8dfd4747c9f664" exitCode=0 Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.314490 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d1b-account-create-update-2v8tl" 
event={"ID":"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3","Type":"ContainerDied","Data":"780a6a671c940e1f81fb13d333cfb796167a09f1c8228c2ced8dfd4747c9f664"} Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.321346 4624 generic.go:334] "Generic (PLEG): container finished" podID="dd94e772-912b-4331-8658-184ae20ef60b" containerID="ed0a55553f3c7cdce2f0c3a52db41adef65adf38737cf21e64c38d8bcdd1f212" exitCode=0 Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.321417 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-skbgt" event={"ID":"dd94e772-912b-4331-8658-184ae20ef60b","Type":"ContainerDied","Data":"ed0a55553f3c7cdce2f0c3a52db41adef65adf38737cf21e64c38d8bcdd1f212"} Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.329735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865f-account-create-update-qf8xq" event={"ID":"c120112d-dd0a-45a5-9cc8-90829cd3b434","Type":"ContainerStarted","Data":"762a099e8b7e18c8c3787d6e8c5626c7d8012208125ba2b9927617a51234b0fc"} Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.329807 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865f-account-create-update-qf8xq" event={"ID":"c120112d-dd0a-45a5-9cc8-90829cd3b434","Type":"ContainerStarted","Data":"2452b905c6a6f7ccf35eb80f7f92439b086b07f16410e0faef7a7d3b2bb983e7"} Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.332681 4624 generic.go:334] "Generic (PLEG): container finished" podID="1f97bebd-cc2b-4587-89d3-6f7c0f463c57" containerID="3446d5d34d71b6cfb9e5e2bfff7b699988b67697ac423f95a32a2d008ee10658" exitCode=0 Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.332726 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m48t5" event={"ID":"1f97bebd-cc2b-4587-89d3-6f7c0f463c57","Type":"ContainerDied","Data":"3446d5d34d71b6cfb9e5e2bfff7b699988b67697ac423f95a32a2d008ee10658"} Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.786521 
4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.976584 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-dns-svc\") pod \"fda63c97-ade0-4543-b58a-16d10e3e89b6\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.976832 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hws\" (UniqueName: \"kubernetes.io/projected/fda63c97-ade0-4543-b58a-16d10e3e89b6-kube-api-access-b9hws\") pod \"fda63c97-ade0-4543-b58a-16d10e3e89b6\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.976880 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-ovsdbserver-sb\") pod \"fda63c97-ade0-4543-b58a-16d10e3e89b6\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.977074 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-config\") pod \"fda63c97-ade0-4543-b58a-16d10e3e89b6\" (UID: \"fda63c97-ade0-4543-b58a-16d10e3e89b6\") " Feb 28 03:53:21 crc kubenswrapper[4624]: I0228 03:53:21.985359 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda63c97-ade0-4543-b58a-16d10e3e89b6-kube-api-access-b9hws" (OuterVolumeSpecName: "kube-api-access-b9hws") pod "fda63c97-ade0-4543-b58a-16d10e3e89b6" (UID: "fda63c97-ade0-4543-b58a-16d10e3e89b6"). InnerVolumeSpecName "kube-api-access-b9hws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.014654 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fda63c97-ade0-4543-b58a-16d10e3e89b6" (UID: "fda63c97-ade0-4543-b58a-16d10e3e89b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.018685 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fda63c97-ade0-4543-b58a-16d10e3e89b6" (UID: "fda63c97-ade0-4543-b58a-16d10e3e89b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.025077 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-config" (OuterVolumeSpecName: "config") pod "fda63c97-ade0-4543-b58a-16d10e3e89b6" (UID: "fda63c97-ade0-4543-b58a-16d10e3e89b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.079401 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9hws\" (UniqueName: \"kubernetes.io/projected/fda63c97-ade0-4543-b58a-16d10e3e89b6-kube-api-access-b9hws\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.079817 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.079926 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.080023 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fda63c97-ade0-4543-b58a-16d10e3e89b6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.371814 4624 generic.go:334] "Generic (PLEG): container finished" podID="c120112d-dd0a-45a5-9cc8-90829cd3b434" containerID="762a099e8b7e18c8c3787d6e8c5626c7d8012208125ba2b9927617a51234b0fc" exitCode=0 Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.371954 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865f-account-create-update-qf8xq" event={"ID":"c120112d-dd0a-45a5-9cc8-90829cd3b434","Type":"ContainerDied","Data":"762a099e8b7e18c8c3787d6e8c5626c7d8012208125ba2b9927617a51234b0fc"} Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.374842 4624 generic.go:334] "Generic (PLEG): container finished" podID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerID="960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d" exitCode=0 Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 
03:53:22.374962 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.374998 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" event={"ID":"fda63c97-ade0-4543-b58a-16d10e3e89b6","Type":"ContainerDied","Data":"960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d"} Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.375039 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-vmvq9" event={"ID":"fda63c97-ade0-4543-b58a-16d10e3e89b6","Type":"ContainerDied","Data":"b5f4458dec5a1d8df069975fa5ca506eb974ab40c349398b6cc611e8e9453a28"} Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.375124 4624 scope.go:117] "RemoveContainer" containerID="960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.413317 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-vmvq9"] Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.423125 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-vmvq9"] Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.427945 4624 scope.go:117] "RemoveContainer" containerID="1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.452767 4624 scope.go:117] "RemoveContainer" containerID="960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d" Feb 28 03:53:22 crc kubenswrapper[4624]: E0228 03:53:22.456436 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d\": container with ID starting with 960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d not found: 
ID does not exist" containerID="960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.456477 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d"} err="failed to get container status \"960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d\": rpc error: code = NotFound desc = could not find container \"960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d\": container with ID starting with 960d56a8949738138755059952a544ee350151bb2f50b6748029670af360349d not found: ID does not exist" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.456507 4624 scope.go:117] "RemoveContainer" containerID="1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0" Feb 28 03:53:22 crc kubenswrapper[4624]: E0228 03:53:22.457115 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0\": container with ID starting with 1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0 not found: ID does not exist" containerID="1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.457138 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0"} err="failed to get container status \"1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0\": rpc error: code = NotFound desc = could not find container \"1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0\": container with ID starting with 1109a58e75c741765a9be71de7878128729ae6ac9b5ff63539366c150cb36be0 not found: ID does not exist" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.828978 4624 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.936315 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c120112d-dd0a-45a5-9cc8-90829cd3b434-operator-scripts\") pod \"c120112d-dd0a-45a5-9cc8-90829cd3b434\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.936534 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8cpd\" (UniqueName: \"kubernetes.io/projected/c120112d-dd0a-45a5-9cc8-90829cd3b434-kube-api-access-j8cpd\") pod \"c120112d-dd0a-45a5-9cc8-90829cd3b434\" (UID: \"c120112d-dd0a-45a5-9cc8-90829cd3b434\") " Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.937492 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c120112d-dd0a-45a5-9cc8-90829cd3b434-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c120112d-dd0a-45a5-9cc8-90829cd3b434" (UID: "c120112d-dd0a-45a5-9cc8-90829cd3b434"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.937648 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c120112d-dd0a-45a5-9cc8-90829cd3b434-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:22 crc kubenswrapper[4624]: I0228 03:53:22.944631 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c120112d-dd0a-45a5-9cc8-90829cd3b434-kube-api-access-j8cpd" (OuterVolumeSpecName: "kube-api-access-j8cpd") pod "c120112d-dd0a-45a5-9cc8-90829cd3b434" (UID: "c120112d-dd0a-45a5-9cc8-90829cd3b434"). InnerVolumeSpecName "kube-api-access-j8cpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.000621 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m48t5" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.006902 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.011869 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.054956 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcnfl\" (UniqueName: \"kubernetes.io/projected/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-kube-api-access-hcnfl\") pod \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.055260 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-operator-scripts\") pod \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\" (UID: \"1f97bebd-cc2b-4587-89d3-6f7c0f463c57\") " Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.056008 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f97bebd-cc2b-4587-89d3-6f7c0f463c57" (UID: "1f97bebd-cc2b-4587-89d3-6f7c0f463c57"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.056664 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8cpd\" (UniqueName: \"kubernetes.io/projected/c120112d-dd0a-45a5-9cc8-90829cd3b434-kube-api-access-j8cpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.056706 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.068875 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-kube-api-access-hcnfl" (OuterVolumeSpecName: "kube-api-access-hcnfl") pod "1f97bebd-cc2b-4587-89d3-6f7c0f463c57" (UID: "1f97bebd-cc2b-4587-89d3-6f7c0f463c57"). InnerVolumeSpecName "kube-api-access-hcnfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.157538 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85bmt\" (UniqueName: \"kubernetes.io/projected/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-kube-api-access-85bmt\") pod \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.157597 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-operator-scripts\") pod \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\" (UID: \"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3\") " Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.157632 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd94e772-912b-4331-8658-184ae20ef60b-operator-scripts\") pod \"dd94e772-912b-4331-8658-184ae20ef60b\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.157721 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5295r\" (UniqueName: \"kubernetes.io/projected/dd94e772-912b-4331-8658-184ae20ef60b-kube-api-access-5295r\") pod \"dd94e772-912b-4331-8658-184ae20ef60b\" (UID: \"dd94e772-912b-4331-8658-184ae20ef60b\") " Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.158162 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcnfl\" (UniqueName: \"kubernetes.io/projected/1f97bebd-cc2b-4587-89d3-6f7c0f463c57-kube-api-access-hcnfl\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.158726 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dd94e772-912b-4331-8658-184ae20ef60b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd94e772-912b-4331-8658-184ae20ef60b" (UID: "dd94e772-912b-4331-8658-184ae20ef60b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.159207 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" (UID: "1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.162762 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-kube-api-access-85bmt" (OuterVolumeSpecName: "kube-api-access-85bmt") pod "1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" (UID: "1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3"). InnerVolumeSpecName "kube-api-access-85bmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.163457 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd94e772-912b-4331-8658-184ae20ef60b-kube-api-access-5295r" (OuterVolumeSpecName: "kube-api-access-5295r") pod "dd94e772-912b-4331-8658-184ae20ef60b" (UID: "dd94e772-912b-4331-8658-184ae20ef60b"). InnerVolumeSpecName "kube-api-access-5295r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.262599 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5295r\" (UniqueName: \"kubernetes.io/projected/dd94e772-912b-4331-8658-184ae20ef60b-kube-api-access-5295r\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.262961 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85bmt\" (UniqueName: \"kubernetes.io/projected/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-kube-api-access-85bmt\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.263284 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.263440 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd94e772-912b-4331-8658-184ae20ef60b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.298818 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7pjw8"] Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.299616 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerName="init" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.299663 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerName="init" Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.299698 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" containerName="mariadb-account-create-update" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.299714 4624 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" containerName="mariadb-account-create-update" Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.299786 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c120112d-dd0a-45a5-9cc8-90829cd3b434" containerName="mariadb-account-create-update" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.299801 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c120112d-dd0a-45a5-9cc8-90829cd3b434" containerName="mariadb-account-create-update" Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.299846 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f97bebd-cc2b-4587-89d3-6f7c0f463c57" containerName="mariadb-database-create" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.299859 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f97bebd-cc2b-4587-89d3-6f7c0f463c57" containerName="mariadb-database-create" Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.299898 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd94e772-912b-4331-8658-184ae20ef60b" containerName="mariadb-database-create" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.299912 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd94e772-912b-4331-8658-184ae20ef60b" containerName="mariadb-database-create" Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.299946 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerName="dnsmasq-dns" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.299962 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerName="dnsmasq-dns" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.300339 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd94e772-912b-4331-8658-184ae20ef60b" containerName="mariadb-database-create" Feb 28 03:53:23 crc 
kubenswrapper[4624]: I0228 03:53:23.300369 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" containerName="dnsmasq-dns" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.300387 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f97bebd-cc2b-4587-89d3-6f7c0f463c57" containerName="mariadb-database-create" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.300419 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" containerName="mariadb-account-create-update" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.300442 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c120112d-dd0a-45a5-9cc8-90829cd3b434" containerName="mariadb-account-create-update" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.301736 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.311921 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7pjw8"] Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.365040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc2j5\" (UniqueName: \"kubernetes.io/projected/098b4982-cc16-4f80-97f7-fe2a7e29ec02-kube-api-access-gc2j5\") pod \"glance-db-create-7pjw8\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.365822 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/098b4982-cc16-4f80-97f7-fe2a7e29ec02-operator-scripts\") pod \"glance-db-create-7pjw8\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc 
kubenswrapper[4624]: I0228 03:53:23.398738 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-skbgt" event={"ID":"dd94e772-912b-4331-8658-184ae20ef60b","Type":"ContainerDied","Data":"a35170f3f4a93fc572af3eff82002adc7266c49d1c90874f35f60b33c4cf9e57"} Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.398795 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a35170f3f4a93fc572af3eff82002adc7266c49d1c90874f35f60b33c4cf9e57" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.398876 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-skbgt" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.402368 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-865f-account-create-update-qf8xq" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.402762 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865f-account-create-update-qf8xq" event={"ID":"c120112d-dd0a-45a5-9cc8-90829cd3b434","Type":"ContainerDied","Data":"2452b905c6a6f7ccf35eb80f7f92439b086b07f16410e0faef7a7d3b2bb983e7"} Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.403165 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2452b905c6a6f7ccf35eb80f7f92439b086b07f16410e0faef7a7d3b2bb983e7" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.405235 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m48t5" event={"ID":"1f97bebd-cc2b-4587-89d3-6f7c0f463c57","Type":"ContainerDied","Data":"92c214a33b67a71c9e568aa51cfa7aeda9c45f227f0d14df7c9370b252f77f36"} Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.405288 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c214a33b67a71c9e568aa51cfa7aeda9c45f227f0d14df7c9370b252f77f36" Feb 28 03:53:23 crc 
kubenswrapper[4624]: I0228 03:53:23.405558 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m48t5" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.407076 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6c58-account-create-update-lx7n2"] Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.408691 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d1b-account-create-update-2v8tl" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.408819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d1b-account-create-update-2v8tl" event={"ID":"1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3","Type":"ContainerDied","Data":"61d9037411fd9c962bc03158dc4dc60a5dc0b63367136c9f280f0f163d1f0531"} Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.408846 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d9037411fd9c962bc03158dc4dc60a5dc0b63367136c9f280f0f163d1f0531" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.408933 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.414169 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.436791 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6c58-account-create-update-lx7n2"] Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.467493 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/098b4982-cc16-4f80-97f7-fe2a7e29ec02-operator-scripts\") pod \"glance-db-create-7pjw8\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.467544 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkcfj\" (UniqueName: \"kubernetes.io/projected/015e6767-0363-4b22-83db-95e90db5e386-kube-api-access-hkcfj\") pod \"glance-6c58-account-create-update-lx7n2\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.467599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc2j5\" (UniqueName: \"kubernetes.io/projected/098b4982-cc16-4f80-97f7-fe2a7e29ec02-kube-api-access-gc2j5\") pod \"glance-db-create-7pjw8\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.468455 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/098b4982-cc16-4f80-97f7-fe2a7e29ec02-operator-scripts\") pod \"glance-db-create-7pjw8\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " 
pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.470045 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015e6767-0363-4b22-83db-95e90db5e386-operator-scripts\") pod \"glance-6c58-account-create-update-lx7n2\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.510768 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc2j5\" (UniqueName: \"kubernetes.io/projected/098b4982-cc16-4f80-97f7-fe2a7e29ec02-kube-api-access-gc2j5\") pod \"glance-db-create-7pjw8\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: E0228 03:53:23.566808 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1921f1f5_2dcd_4574_b20c_c2a3c4b55cf3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc120112d_dd0a_45a5_9cc8_90829cd3b434.slice/crio-2452b905c6a6f7ccf35eb80f7f92439b086b07f16410e0faef7a7d3b2bb983e7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f97bebd_cc2b_4587_89d3_6f7c0f463c57.slice\": RecentStats: unable to find data in memory cache]" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.572265 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015e6767-0363-4b22-83db-95e90db5e386-operator-scripts\") pod \"glance-6c58-account-create-update-lx7n2\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " 
pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.572382 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkcfj\" (UniqueName: \"kubernetes.io/projected/015e6767-0363-4b22-83db-95e90db5e386-kube-api-access-hkcfj\") pod \"glance-6c58-account-create-update-lx7n2\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.573375 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015e6767-0363-4b22-83db-95e90db5e386-operator-scripts\") pod \"glance-6c58-account-create-update-lx7n2\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.593523 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkcfj\" (UniqueName: \"kubernetes.io/projected/015e6767-0363-4b22-83db-95e90db5e386-kube-api-access-hkcfj\") pod \"glance-6c58-account-create-update-lx7n2\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.620306 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:23 crc kubenswrapper[4624]: I0228 03:53:23.790771 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.106221 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda63c97-ade0-4543-b58a-16d10e3e89b6" path="/var/lib/kubelet/pods/fda63c97-ade0-4543-b58a-16d10e3e89b6/volumes" Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.112345 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7pjw8"] Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.265255 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6c58-account-create-update-lx7n2"] Feb 28 03:53:24 crc kubenswrapper[4624]: W0228 03:53:24.273100 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod015e6767_0363_4b22_83db_95e90db5e386.slice/crio-9e057e78fcd350266a7795ef91adadf5815d745d26a72c400bcc5943fab17753 WatchSource:0}: Error finding container 9e057e78fcd350266a7795ef91adadf5815d745d26a72c400bcc5943fab17753: Status 404 returned error can't find the container with id 9e057e78fcd350266a7795ef91adadf5815d745d26a72c400bcc5943fab17753 Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.425390 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6c58-account-create-update-lx7n2" event={"ID":"015e6767-0363-4b22-83db-95e90db5e386","Type":"ContainerStarted","Data":"9e057e78fcd350266a7795ef91adadf5815d745d26a72c400bcc5943fab17753"} Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.426924 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pjw8" event={"ID":"098b4982-cc16-4f80-97f7-fe2a7e29ec02","Type":"ContainerStarted","Data":"53af4ee73cd670875e9e6cfb1e59885d3f8010c3cb97f5c1231146db7cd97070"} Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.426952 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pjw8" 
event={"ID":"098b4982-cc16-4f80-97f7-fe2a7e29ec02","Type":"ContainerStarted","Data":"211dd6ce232856edbbb08c22ffa5aab053fb773eb7d95e2d333e40466fae7585"} Feb 28 03:53:24 crc kubenswrapper[4624]: I0228 03:53:24.460369 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-7pjw8" podStartSLOduration=1.460345054 podStartE2EDuration="1.460345054s" podCreationTimestamp="2026-02-28 03:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:24.458338529 +0000 UTC m=+1059.122377838" watchObservedRunningTime="2026-02-28 03:53:24.460345054 +0000 UTC m=+1059.124384363" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.403345 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bmmbr"] Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.405137 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.410395 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.424697 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bmmbr"] Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.437222 4624 generic.go:334] "Generic (PLEG): container finished" podID="098b4982-cc16-4f80-97f7-fe2a7e29ec02" containerID="53af4ee73cd670875e9e6cfb1e59885d3f8010c3cb97f5c1231146db7cd97070" exitCode=0 Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.437300 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pjw8" event={"ID":"098b4982-cc16-4f80-97f7-fe2a7e29ec02","Type":"ContainerDied","Data":"53af4ee73cd670875e9e6cfb1e59885d3f8010c3cb97f5c1231146db7cd97070"} Feb 28 03:53:25 crc 
kubenswrapper[4624]: I0228 03:53:25.438955 4624 generic.go:334] "Generic (PLEG): container finished" podID="015e6767-0363-4b22-83db-95e90db5e386" containerID="221bc2ae7755b9ed632c86287255a658800fd418dd9882e35dd6aaf7b7306a6e" exitCode=0 Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.439013 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6c58-account-create-update-lx7n2" event={"ID":"015e6767-0363-4b22-83db-95e90db5e386","Type":"ContainerDied","Data":"221bc2ae7755b9ed632c86287255a658800fd418dd9882e35dd6aaf7b7306a6e"} Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.534378 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a274b6-9d76-4b69-9bf9-677152bacfb0-operator-scripts\") pod \"root-account-create-update-bmmbr\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.534423 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56k6j\" (UniqueName: \"kubernetes.io/projected/82a274b6-9d76-4b69-9bf9-677152bacfb0-kube-api-access-56k6j\") pod \"root-account-create-update-bmmbr\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.636829 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a274b6-9d76-4b69-9bf9-677152bacfb0-operator-scripts\") pod \"root-account-create-update-bmmbr\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.636893 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56k6j\" (UniqueName: 
\"kubernetes.io/projected/82a274b6-9d76-4b69-9bf9-677152bacfb0-kube-api-access-56k6j\") pod \"root-account-create-update-bmmbr\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.640468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a274b6-9d76-4b69-9bf9-677152bacfb0-operator-scripts\") pod \"root-account-create-update-bmmbr\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.673957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56k6j\" (UniqueName: \"kubernetes.io/projected/82a274b6-9d76-4b69-9bf9-677152bacfb0-kube-api-access-56k6j\") pod \"root-account-create-update-bmmbr\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:25 crc kubenswrapper[4624]: I0228 03:53:25.723193 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:26 crc kubenswrapper[4624]: I0228 03:53:26.056178 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bmmbr"] Feb 28 03:53:26 crc kubenswrapper[4624]: I0228 03:53:26.453402 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bmmbr" event={"ID":"82a274b6-9d76-4b69-9bf9-677152bacfb0","Type":"ContainerStarted","Data":"2d9743f89f0f5c87640465c946080642bd0c5cd805b04608a130e5648e4da90d"} Feb 28 03:53:26 crc kubenswrapper[4624]: I0228 03:53:26.453477 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bmmbr" event={"ID":"82a274b6-9d76-4b69-9bf9-677152bacfb0","Type":"ContainerStarted","Data":"39c7431d6e4f4dd57221ef6d1c585f1a6bd9eb6902a8eb18b5c5d879362a0525"} Feb 28 03:53:26 crc kubenswrapper[4624]: I0228 03:53:26.457903 4624 generic.go:334] "Generic (PLEG): container finished" podID="41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" containerID="813255ae47c9bdd6c17541924db968d3c0f9bbf98e6085aad31eb05daaa9a31d" exitCode=0 Feb 28 03:53:26 crc kubenswrapper[4624]: I0228 03:53:26.457988 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gfd7z" event={"ID":"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33","Type":"ContainerDied","Data":"813255ae47c9bdd6c17541924db968d3c0f9bbf98e6085aad31eb05daaa9a31d"} Feb 28 03:53:26 crc kubenswrapper[4624]: I0228 03:53:26.477897 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-bmmbr" podStartSLOduration=1.477877491 podStartE2EDuration="1.477877491s" podCreationTimestamp="2026-02-28 03:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:26.475065514 +0000 UTC m=+1061.139104823" watchObservedRunningTime="2026-02-28 03:53:26.477877491 +0000 
UTC m=+1061.141916800" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.178179 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.184276 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.297782 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/098b4982-cc16-4f80-97f7-fe2a7e29ec02-operator-scripts\") pod \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.298183 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkcfj\" (UniqueName: \"kubernetes.io/projected/015e6767-0363-4b22-83db-95e90db5e386-kube-api-access-hkcfj\") pod \"015e6767-0363-4b22-83db-95e90db5e386\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.298250 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc2j5\" (UniqueName: \"kubernetes.io/projected/098b4982-cc16-4f80-97f7-fe2a7e29ec02-kube-api-access-gc2j5\") pod \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\" (UID: \"098b4982-cc16-4f80-97f7-fe2a7e29ec02\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.298296 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015e6767-0363-4b22-83db-95e90db5e386-operator-scripts\") pod \"015e6767-0363-4b22-83db-95e90db5e386\" (UID: \"015e6767-0363-4b22-83db-95e90db5e386\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.299624 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/015e6767-0363-4b22-83db-95e90db5e386-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "015e6767-0363-4b22-83db-95e90db5e386" (UID: "015e6767-0363-4b22-83db-95e90db5e386"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.299902 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/098b4982-cc16-4f80-97f7-fe2a7e29ec02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "098b4982-cc16-4f80-97f7-fe2a7e29ec02" (UID: "098b4982-cc16-4f80-97f7-fe2a7e29ec02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.305363 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015e6767-0363-4b22-83db-95e90db5e386-kube-api-access-hkcfj" (OuterVolumeSpecName: "kube-api-access-hkcfj") pod "015e6767-0363-4b22-83db-95e90db5e386" (UID: "015e6767-0363-4b22-83db-95e90db5e386"). InnerVolumeSpecName "kube-api-access-hkcfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.305633 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098b4982-cc16-4f80-97f7-fe2a7e29ec02-kube-api-access-gc2j5" (OuterVolumeSpecName: "kube-api-access-gc2j5") pod "098b4982-cc16-4f80-97f7-fe2a7e29ec02" (UID: "098b4982-cc16-4f80-97f7-fe2a7e29ec02"). InnerVolumeSpecName "kube-api-access-gc2j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.400538 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkcfj\" (UniqueName: \"kubernetes.io/projected/015e6767-0363-4b22-83db-95e90db5e386-kube-api-access-hkcfj\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.400602 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc2j5\" (UniqueName: \"kubernetes.io/projected/098b4982-cc16-4f80-97f7-fe2a7e29ec02-kube-api-access-gc2j5\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.400616 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015e6767-0363-4b22-83db-95e90db5e386-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.400631 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/098b4982-cc16-4f80-97f7-fe2a7e29ec02-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.469519 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7pjw8" event={"ID":"098b4982-cc16-4f80-97f7-fe2a7e29ec02","Type":"ContainerDied","Data":"211dd6ce232856edbbb08c22ffa5aab053fb773eb7d95e2d333e40466fae7585"} Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.469567 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="211dd6ce232856edbbb08c22ffa5aab053fb773eb7d95e2d333e40466fae7585" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.469622 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7pjw8" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.472038 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6c58-account-create-update-lx7n2" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.472131 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6c58-account-create-update-lx7n2" event={"ID":"015e6767-0363-4b22-83db-95e90db5e386","Type":"ContainerDied","Data":"9e057e78fcd350266a7795ef91adadf5815d745d26a72c400bcc5943fab17753"} Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.472177 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e057e78fcd350266a7795ef91adadf5815d745d26a72c400bcc5943fab17753" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.474772 4624 generic.go:334] "Generic (PLEG): container finished" podID="82a274b6-9d76-4b69-9bf9-677152bacfb0" containerID="2d9743f89f0f5c87640465c946080642bd0c5cd805b04608a130e5648e4da90d" exitCode=0 Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.475531 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bmmbr" event={"ID":"82a274b6-9d76-4b69-9bf9-677152bacfb0","Type":"ContainerDied","Data":"2d9743f89f0f5c87640465c946080642bd0c5cd805b04608a130e5648e4da90d"} Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.775974 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815014 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-combined-ca-bundle\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815134 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-dispersionconf\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815182 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-ring-data-devices\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815265 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcqlr\" (UniqueName: \"kubernetes.io/projected/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-kube-api-access-pcqlr\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815289 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-swiftconf\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815328 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-scripts\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815359 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-etc-swift\") pod \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\" (UID: \"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33\") " Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.815531 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.816288 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.825074 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-kube-api-access-pcqlr" (OuterVolumeSpecName: "kube-api-access-pcqlr") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "kube-api-access-pcqlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.827823 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08cf446c-fcb0-4f4a-af81-0f64d52669e8-etc-swift\") pod \"swift-storage-0\" (UID: \"08cf446c-fcb0-4f4a-af81-0f64d52669e8\") " pod="openstack/swift-storage-0" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.829149 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.839226 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.855762 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.858735 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-scripts" (OuterVolumeSpecName: "scripts") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.864324 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" (UID: "41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.917974 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcqlr\" (UniqueName: \"kubernetes.io/projected/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-kube-api-access-pcqlr\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.918019 4624 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.918033 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.918044 4624 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc 
kubenswrapper[4624]: I0228 03:53:27.918055 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.918065 4624 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:27 crc kubenswrapper[4624]: I0228 03:53:27.918076 4624 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.102681 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.485375 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gfd7z" event={"ID":"41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33","Type":"ContainerDied","Data":"b995b293b48517fc2c24da66892b842ad17203b3b9f59ddc41e58c64d5e8a5dc"} Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.485970 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b995b293b48517fc2c24da66892b842ad17203b3b9f59ddc41e58c64d5e8a5dc" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.485436 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gfd7z" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646194 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jmxqw"] Feb 28 03:53:28 crc kubenswrapper[4624]: E0228 03:53:28.646655 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" containerName="swift-ring-rebalance" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646674 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" containerName="swift-ring-rebalance" Feb 28 03:53:28 crc kubenswrapper[4624]: E0228 03:53:28.646689 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015e6767-0363-4b22-83db-95e90db5e386" containerName="mariadb-account-create-update" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646697 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="015e6767-0363-4b22-83db-95e90db5e386" containerName="mariadb-account-create-update" Feb 28 03:53:28 crc kubenswrapper[4624]: E0228 03:53:28.646713 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098b4982-cc16-4f80-97f7-fe2a7e29ec02" containerName="mariadb-database-create" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646719 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="098b4982-cc16-4f80-97f7-fe2a7e29ec02" containerName="mariadb-database-create" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646892 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="015e6767-0363-4b22-83db-95e90db5e386" containerName="mariadb-account-create-update" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646910 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="098b4982-cc16-4f80-97f7-fe2a7e29ec02" containerName="mariadb-database-create" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.646925 4624 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33" containerName="swift-ring-rebalance" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.647598 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.651665 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.661417 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ptb94" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.667737 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jmxqw"] Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.736766 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-combined-ca-bundle\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.736859 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-db-sync-config-data\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.736896 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blt7\" (UniqueName: \"kubernetes.io/projected/e987d56b-dcae-4f73-8e96-9010674f3c4e-kube-api-access-8blt7\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 
03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.736933 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-config-data\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.769007 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.838907 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8blt7\" (UniqueName: \"kubernetes.io/projected/e987d56b-dcae-4f73-8e96-9010674f3c4e-kube-api-access-8blt7\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.839000 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-config-data\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.839102 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-combined-ca-bundle\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.839205 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-db-sync-config-data\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " 
pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.848316 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-db-sync-config-data\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.849571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-config-data\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.861886 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-combined-ca-bundle\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.864920 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8blt7\" (UniqueName: \"kubernetes.io/projected/e987d56b-dcae-4f73-8e96-9010674f3c4e-kube-api-access-8blt7\") pod \"glance-db-sync-jmxqw\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.882744 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.940864 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a274b6-9d76-4b69-9bf9-677152bacfb0-operator-scripts\") pod \"82a274b6-9d76-4b69-9bf9-677152bacfb0\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.941252 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56k6j\" (UniqueName: \"kubernetes.io/projected/82a274b6-9d76-4b69-9bf9-677152bacfb0-kube-api-access-56k6j\") pod \"82a274b6-9d76-4b69-9bf9-677152bacfb0\" (UID: \"82a274b6-9d76-4b69-9bf9-677152bacfb0\") " Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.942447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82a274b6-9d76-4b69-9bf9-677152bacfb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82a274b6-9d76-4b69-9bf9-677152bacfb0" (UID: "82a274b6-9d76-4b69-9bf9-677152bacfb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.945546 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a274b6-9d76-4b69-9bf9-677152bacfb0-kube-api-access-56k6j" (OuterVolumeSpecName: "kube-api-access-56k6j") pod "82a274b6-9d76-4b69-9bf9-677152bacfb0" (UID: "82a274b6-9d76-4b69-9bf9-677152bacfb0"). InnerVolumeSpecName "kube-api-access-56k6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:28 crc kubenswrapper[4624]: I0228 03:53:28.963525 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jmxqw" Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.043143 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56k6j\" (UniqueName: \"kubernetes.io/projected/82a274b6-9d76-4b69-9bf9-677152bacfb0-kube-api-access-56k6j\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.043185 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82a274b6-9d76-4b69-9bf9-677152bacfb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.498656 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bmmbr" event={"ID":"82a274b6-9d76-4b69-9bf9-677152bacfb0","Type":"ContainerDied","Data":"39c7431d6e4f4dd57221ef6d1c585f1a6bd9eb6902a8eb18b5c5d879362a0525"} Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.499313 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c7431d6e4f4dd57221ef6d1c585f1a6bd9eb6902a8eb18b5c5d879362a0525" Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.498690 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bmmbr" Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.502231 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"33e80324aecdc45f4869886a08c3544af3ee2a49e442e7d5af9308c300cdcaa2"} Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.545661 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jmxqw"] Feb 28 03:53:29 crc kubenswrapper[4624]: W0228 03:53:29.551226 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode987d56b_dcae_4f73_8e96_9010674f3c4e.slice/crio-0b340ef2159026b325e51f2100b4d95eeea0643ca5aaef65af9cbf48615df78d WatchSource:0}: Error finding container 0b340ef2159026b325e51f2100b4d95eeea0643ca5aaef65af9cbf48615df78d: Status 404 returned error can't find the container with id 0b340ef2159026b325e51f2100b4d95eeea0643ca5aaef65af9cbf48615df78d Feb 28 03:53:29 crc kubenswrapper[4624]: I0228 03:53:29.736208 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.089635 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.122390 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f76ww" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.358970 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-phft7-config-ljt4b"] Feb 28 03:53:30 crc kubenswrapper[4624]: E0228 03:53:30.359928 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a274b6-9d76-4b69-9bf9-677152bacfb0" containerName="mariadb-account-create-update" Feb 28 03:53:30 crc 
kubenswrapper[4624]: I0228 03:53:30.359954 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a274b6-9d76-4b69-9bf9-677152bacfb0" containerName="mariadb-account-create-update" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.360202 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a274b6-9d76-4b69-9bf9-677152bacfb0" containerName="mariadb-account-create-update" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.360911 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.363212 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.374298 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-phft7-config-ljt4b"] Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.497239 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-scripts\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.497304 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-additional-scripts\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.497380 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-log-ovn\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.497412 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.497446 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run-ovn\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.497476 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrgc\" (UniqueName: \"kubernetes.io/projected/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-kube-api-access-rsrgc\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.513155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmxqw" event={"ID":"e987d56b-dcae-4f73-8e96-9010674f3c4e","Type":"ContainerStarted","Data":"0b340ef2159026b325e51f2100b4d95eeea0643ca5aaef65af9cbf48615df78d"} Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.515926 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"6825171b952259c0585115bd8e529c504256a2d2e71a15c6aa7a2768a246f035"} Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.598756 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run-ovn\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.598812 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrgc\" (UniqueName: \"kubernetes.io/projected/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-kube-api-access-rsrgc\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.598900 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-scripts\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.598929 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-additional-scripts\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.598957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-log-ovn\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.598977 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.599696 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.599701 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run-ovn\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.600066 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-log-ovn\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.600414 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-additional-scripts\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.601862 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-scripts\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.620731 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrgc\" (UniqueName: \"kubernetes.io/projected/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-kube-api-access-rsrgc\") pod \"ovn-controller-phft7-config-ljt4b\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:30 crc kubenswrapper[4624]: I0228 03:53:30.727006 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.112171 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bmmbr"] Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.130321 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bmmbr"] Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.270458 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-phft7-config-ljt4b"] Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.535945 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7-config-ljt4b" event={"ID":"d92a7c52-2c3c-4ccb-87b8-8fee4d069135","Type":"ContainerStarted","Data":"44eab515ee105acf94c248ae945e08cfed18063fd404a471a2e2779c92ea4976"} Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.545233 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"bc0fd18939d9905933965f1aa04dabb43df266c9640c38e4ec85bd68e7737021"} Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.545283 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"6c79344216e4422c5b81d433e6d62c96a83c4df1ff0661b2b6870474f59d01bc"} Feb 28 03:53:31 crc kubenswrapper[4624]: I0228 03:53:31.545294 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"8aad61d74adf80435d6bfb8e1dbcb9e2810167d0d50d44f0695fd170ad4043af"} Feb 28 03:53:32 crc kubenswrapper[4624]: I0228 03:53:32.102021 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a274b6-9d76-4b69-9bf9-677152bacfb0" 
path="/var/lib/kubelet/pods/82a274b6-9d76-4b69-9bf9-677152bacfb0/volumes" Feb 28 03:53:32 crc kubenswrapper[4624]: I0228 03:53:32.562376 4624 generic.go:334] "Generic (PLEG): container finished" podID="d92a7c52-2c3c-4ccb-87b8-8fee4d069135" containerID="64445ae7b528324fab9b7bff71b59add9aa1376f210ed59a5d97a77123c50746" exitCode=0 Feb 28 03:53:32 crc kubenswrapper[4624]: I0228 03:53:32.562433 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7-config-ljt4b" event={"ID":"d92a7c52-2c3c-4ccb-87b8-8fee4d069135","Type":"ContainerDied","Data":"64445ae7b528324fab9b7bff71b59add9aa1376f210ed59a5d97a77123c50746"} Feb 28 03:53:32 crc kubenswrapper[4624]: I0228 03:53:32.570576 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"407c4d0d076bf7a46fd9252fb27f1d16d2cd10d51c239989041869f4334a37ee"} Feb 28 03:53:33 crc kubenswrapper[4624]: I0228 03:53:33.587320 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"c21158b5996c4928c0ef9cc5e0f62d9f5fcdb1fe69b31ec07a0c4c6abea079a2"} Feb 28 03:53:33 crc kubenswrapper[4624]: I0228 03:53:33.589176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"3f1277b5f2665142d3cb877fdb1aefa80f2254d3c65d72be340cba9bb3f722ce"} Feb 28 03:53:33 crc kubenswrapper[4624]: I0228 03:53:33.589192 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"23f46b4e0c5459d311a1a7e698e6955bcd77877f4b403d8c5be430b655f4d5cc"} Feb 28 03:53:33 crc kubenswrapper[4624]: I0228 03:53:33.594182 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerID="95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5" exitCode=0 Feb 28 03:53:33 crc kubenswrapper[4624]: I0228 03:53:33.594283 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946","Type":"ContainerDied","Data":"95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5"} Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.199893 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.298015 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run\") pod \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.298135 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run" (OuterVolumeSpecName: "var-run") pod "d92a7c52-2c3c-4ccb-87b8-8fee4d069135" (UID: "d92a7c52-2c3c-4ccb-87b8-8fee4d069135"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.298293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrgc\" (UniqueName: \"kubernetes.io/projected/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-kube-api-access-rsrgc\") pod \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.300711 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-log-ovn\") pod \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.300843 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d92a7c52-2c3c-4ccb-87b8-8fee4d069135" (UID: "d92a7c52-2c3c-4ccb-87b8-8fee4d069135"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.300830 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-scripts\") pod \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.300931 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-additional-scripts\") pod \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.301005 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run-ovn\") pod \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\" (UID: \"d92a7c52-2c3c-4ccb-87b8-8fee4d069135\") " Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.301865 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d92a7c52-2c3c-4ccb-87b8-8fee4d069135" (UID: "d92a7c52-2c3c-4ccb-87b8-8fee4d069135"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.302013 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d92a7c52-2c3c-4ccb-87b8-8fee4d069135" (UID: "d92a7c52-2c3c-4ccb-87b8-8fee4d069135"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.303543 4624 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.303574 4624 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.303585 4624 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.303595 4624 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.308835 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-kube-api-access-rsrgc" (OuterVolumeSpecName: "kube-api-access-rsrgc") pod "d92a7c52-2c3c-4ccb-87b8-8fee4d069135" (UID: "d92a7c52-2c3c-4ccb-87b8-8fee4d069135"). InnerVolumeSpecName "kube-api-access-rsrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.309502 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-scripts" (OuterVolumeSpecName: "scripts") pod "d92a7c52-2c3c-4ccb-87b8-8fee4d069135" (UID: "d92a7c52-2c3c-4ccb-87b8-8fee4d069135"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.405453 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrgc\" (UniqueName: \"kubernetes.io/projected/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-kube-api-access-rsrgc\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.405490 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d92a7c52-2c3c-4ccb-87b8-8fee4d069135-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.607505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7-config-ljt4b" event={"ID":"d92a7c52-2c3c-4ccb-87b8-8fee4d069135","Type":"ContainerDied","Data":"44eab515ee105acf94c248ae945e08cfed18063fd404a471a2e2779c92ea4976"} Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.607547 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44eab515ee105acf94c248ae945e08cfed18063fd404a471a2e2779c92ea4976" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.607638 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-phft7-config-ljt4b" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.611632 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946","Type":"ContainerStarted","Data":"4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b"} Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.611966 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:53:34 crc kubenswrapper[4624]: I0228 03:53:34.663841 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.088152797 podStartE2EDuration="1m21.663817197s" podCreationTimestamp="2026-02-28 03:52:13 +0000 UTC" firstStartedPulling="2026-02-28 03:52:16.024692122 +0000 UTC m=+990.688731431" lastFinishedPulling="2026-02-28 03:52:55.600356522 +0000 UTC m=+1030.264395831" observedRunningTime="2026-02-28 03:53:34.651713755 +0000 UTC m=+1069.315753064" watchObservedRunningTime="2026-02-28 03:53:34.663817197 +0000 UTC m=+1069.327856506" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.335389 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-phft7-config-ljt4b"] Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.348535 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-phft7-config-ljt4b"] Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.476528 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-phft7-config-tlgfq"] Feb 28 03:53:35 crc kubenswrapper[4624]: E0228 03:53:35.477014 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92a7c52-2c3c-4ccb-87b8-8fee4d069135" containerName="ovn-config" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.477039 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d92a7c52-2c3c-4ccb-87b8-8fee4d069135" containerName="ovn-config" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.480297 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92a7c52-2c3c-4ccb-87b8-8fee4d069135" containerName="ovn-config" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.481281 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.484291 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.492536 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-phft7-config-tlgfq"] Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.533236 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.533317 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-scripts\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.533340 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-additional-scripts\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " 
pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.533374 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-log-ovn\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.533391 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run-ovn\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.533409 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnb6\" (UniqueName: \"kubernetes.io/projected/917e505a-faa3-442e-8e82-cd347f005073-kube-api-access-7bnb6\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634180 4624 generic.go:334] "Generic (PLEG): container finished" podID="4be4f891-f796-4d4b-b916-e669037f474a" containerID="8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0" exitCode=0 Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4be4f891-f796-4d4b-b916-e669037f474a","Type":"ContainerDied","Data":"8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0"} Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634676 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634762 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-scripts\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634783 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-additional-scripts\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634817 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-log-ovn\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634835 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run-ovn\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.634859 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bnb6\" 
(UniqueName: \"kubernetes.io/projected/917e505a-faa3-442e-8e82-cd347f005073-kube-api-access-7bnb6\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.635831 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-log-ovn\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.636450 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-additional-scripts\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.636630 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.636672 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run-ovn\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.638379 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-scripts\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.653224 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"396287c95c4baab624353aeddd2f1e3a838b91651b37e3477b3a903c1d084f39"} Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.653270 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"58535a48369559b4870032caf38a73a9a6f110bf8b5b74c0dedebc893d22284b"} Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.653281 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"d6e06e7535e8bd73746c2e83349f950ff273581b9c705c1b7b4955dec5165402"} Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.685654 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bnb6\" (UniqueName: \"kubernetes.io/projected/917e505a-faa3-442e-8e82-cd347f005073-kube-api-access-7bnb6\") pod \"ovn-controller-phft7-config-tlgfq\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:35 crc kubenswrapper[4624]: I0228 03:53:35.812606 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.113204 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92a7c52-2c3c-4ccb-87b8-8fee4d069135" path="/var/lib/kubelet/pods/d92a7c52-2c3c-4ccb-87b8-8fee4d069135/volumes" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.140374 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jj9xk"] Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.141598 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.150527 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.233001 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jj9xk"] Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.249566 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlpd\" (UniqueName: \"kubernetes.io/projected/7cb97dc8-c1d9-4d0b-82df-672a9d561356-kube-api-access-mnlpd\") pod \"root-account-create-update-jj9xk\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.249621 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb97dc8-c1d9-4d0b-82df-672a9d561356-operator-scripts\") pod \"root-account-create-update-jj9xk\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.272225 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-phft7-config-tlgfq"] Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.351359 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlpd\" (UniqueName: \"kubernetes.io/projected/7cb97dc8-c1d9-4d0b-82df-672a9d561356-kube-api-access-mnlpd\") pod \"root-account-create-update-jj9xk\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.351422 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb97dc8-c1d9-4d0b-82df-672a9d561356-operator-scripts\") pod \"root-account-create-update-jj9xk\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.356266 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb97dc8-c1d9-4d0b-82df-672a9d561356-operator-scripts\") pod \"root-account-create-update-jj9xk\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.409176 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlpd\" (UniqueName: \"kubernetes.io/projected/7cb97dc8-c1d9-4d0b-82df-672a9d561356-kube-api-access-mnlpd\") pod \"root-account-create-update-jj9xk\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.492375 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.667844 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7-config-tlgfq" event={"ID":"917e505a-faa3-442e-8e82-cd347f005073","Type":"ContainerStarted","Data":"4b56e42847809b5178bfc450d90991eefcf796e6ffa4e1aba968b1cc360a04c4"} Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.676223 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4be4f891-f796-4d4b-b916-e669037f474a","Type":"ContainerStarted","Data":"2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28"} Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.676536 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.713767 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371953.141033 podStartE2EDuration="1m23.71374299s" podCreationTimestamp="2026-02-28 03:52:13 +0000 UTC" firstStartedPulling="2026-02-28 03:52:16.319955712 +0000 UTC m=+990.983995021" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:36.712414694 +0000 UTC m=+1071.376454003" watchObservedRunningTime="2026-02-28 03:53:36.71374299 +0000 UTC m=+1071.377782299" Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.765412 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"5ca1151cc2138e86f481879faa5ef9ff0584c122a45c4902365ec58ca119fba4"} Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.765889 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"dee5f7fd9dcab2500463e4410ef5de351b7d0a94d21b0bdb14c54ca21713975c"} Feb 28 03:53:36 crc kubenswrapper[4624]: I0228 03:53:36.765912 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"9aa6d980ac1307e0b440542a0fb10e18b9c91367dae458b35327de97146e23f9"} Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.152055 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jj9xk"] Feb 28 03:53:37 crc kubenswrapper[4624]: W0228 03:53:37.170922 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cb97dc8_c1d9_4d0b_82df_672a9d561356.slice/crio-50e4391d519369c01d2441d8658912800bef39d5085267365e938841ab24db41 WatchSource:0}: Error finding container 50e4391d519369c01d2441d8658912800bef39d5085267365e938841ab24db41: Status 404 returned error can't find the container with id 50e4391d519369c01d2441d8658912800bef39d5085267365e938841ab24db41 Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.787406 4624 generic.go:334] "Generic (PLEG): container finished" podID="7cb97dc8-c1d9-4d0b-82df-672a9d561356" containerID="d29969dc544e2873b8a9ffc0fef4b068d3a3c84ebb01ebd161c67d0dbe18a33f" exitCode=0 Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.787958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jj9xk" event={"ID":"7cb97dc8-c1d9-4d0b-82df-672a9d561356","Type":"ContainerDied","Data":"d29969dc544e2873b8a9ffc0fef4b068d3a3c84ebb01ebd161c67d0dbe18a33f"} Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.788016 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jj9xk" 
event={"ID":"7cb97dc8-c1d9-4d0b-82df-672a9d561356","Type":"ContainerStarted","Data":"50e4391d519369c01d2441d8658912800bef39d5085267365e938841ab24db41"} Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.793313 4624 generic.go:334] "Generic (PLEG): container finished" podID="917e505a-faa3-442e-8e82-cd347f005073" containerID="8360966c4c8aad01127f1c7d51a9c5d9a660785d6abffb67a53ac7c0c40ac626" exitCode=0 Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.793431 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7-config-tlgfq" event={"ID":"917e505a-faa3-442e-8e82-cd347f005073","Type":"ContainerDied","Data":"8360966c4c8aad01127f1c7d51a9c5d9a660785d6abffb67a53ac7c0c40ac626"} Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.801650 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08cf446c-fcb0-4f4a-af81-0f64d52669e8","Type":"ContainerStarted","Data":"98d0bee831f06329568080005cd183b36363fa6df23a516ee70f03bc57fc85d7"} Feb 28 03:53:37 crc kubenswrapper[4624]: I0228 03:53:37.897814 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.892191332 podStartE2EDuration="27.897792454s" podCreationTimestamp="2026-02-28 03:53:10 +0000 UTC" firstStartedPulling="2026-02-28 03:53:28.776345549 +0000 UTC m=+1063.440384858" lastFinishedPulling="2026-02-28 03:53:34.781946641 +0000 UTC m=+1069.445985980" observedRunningTime="2026-02-28 03:53:37.895537652 +0000 UTC m=+1072.559576961" watchObservedRunningTime="2026-02-28 03:53:37.897792454 +0000 UTC m=+1072.561831763" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.203972 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4fhlg"] Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.215626 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.218059 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.245179 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4fhlg"] Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.297510 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.297588 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.297660 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.297696 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-config\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.297730 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mh6\" (UniqueName: \"kubernetes.io/projected/fa0cd34b-c733-4703-9728-db6d9c69f888-kube-api-access-88mh6\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.297753 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.399792 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.399880 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-config\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.399919 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mh6\" (UniqueName: \"kubernetes.io/projected/fa0cd34b-c733-4703-9728-db6d9c69f888-kube-api-access-88mh6\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.399943 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.400014 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.400036 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.401042 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.401836 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc 
kubenswrapper[4624]: I0228 03:53:38.402560 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-config\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.403614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.404312 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.425070 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mh6\" (UniqueName: \"kubernetes.io/projected/fa0cd34b-c733-4703-9728-db6d9c69f888-kube-api-access-88mh6\") pod \"dnsmasq-dns-6d5b6d6b67-4fhlg\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:38 crc kubenswrapper[4624]: I0228 03:53:38.534779 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.142853 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4fhlg"] Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.274248 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.322945 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-additional-scripts\") pod \"917e505a-faa3-442e-8e82-cd347f005073\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.323004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-log-ovn\") pod \"917e505a-faa3-442e-8e82-cd347f005073\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.323144 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run\") pod \"917e505a-faa3-442e-8e82-cd347f005073\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.323201 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-scripts\") pod \"917e505a-faa3-442e-8e82-cd347f005073\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.323261 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run-ovn\") pod \"917e505a-faa3-442e-8e82-cd347f005073\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.323293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bnb6\" (UniqueName: 
\"kubernetes.io/projected/917e505a-faa3-442e-8e82-cd347f005073-kube-api-access-7bnb6\") pod \"917e505a-faa3-442e-8e82-cd347f005073\" (UID: \"917e505a-faa3-442e-8e82-cd347f005073\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.328904 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "917e505a-faa3-442e-8e82-cd347f005073" (UID: "917e505a-faa3-442e-8e82-cd347f005073"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.328959 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "917e505a-faa3-442e-8e82-cd347f005073" (UID: "917e505a-faa3-442e-8e82-cd347f005073"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.328979 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run" (OuterVolumeSpecName: "var-run") pod "917e505a-faa3-442e-8e82-cd347f005073" (UID: "917e505a-faa3-442e-8e82-cd347f005073"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.329257 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "917e505a-faa3-442e-8e82-cd347f005073" (UID: "917e505a-faa3-442e-8e82-cd347f005073"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.329371 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917e505a-faa3-442e-8e82-cd347f005073-kube-api-access-7bnb6" (OuterVolumeSpecName: "kube-api-access-7bnb6") pod "917e505a-faa3-442e-8e82-cd347f005073" (UID: "917e505a-faa3-442e-8e82-cd347f005073"). InnerVolumeSpecName "kube-api-access-7bnb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.329785 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-scripts" (OuterVolumeSpecName: "scripts") pod "917e505a-faa3-442e-8e82-cd347f005073" (UID: "917e505a-faa3-442e-8e82-cd347f005073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.359529 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.424521 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlpd\" (UniqueName: \"kubernetes.io/projected/7cb97dc8-c1d9-4d0b-82df-672a9d561356-kube-api-access-mnlpd\") pod \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.425205 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb97dc8-c1d9-4d0b-82df-672a9d561356-operator-scripts\") pod \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\" (UID: \"7cb97dc8-c1d9-4d0b-82df-672a9d561356\") " Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426062 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426214 4624 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426401 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bnb6\" (UniqueName: \"kubernetes.io/projected/917e505a-faa3-442e-8e82-cd347f005073-kube-api-access-7bnb6\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426474 4624 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/917e505a-faa3-442e-8e82-cd347f005073-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426538 4624 reconciler_common.go:293] "Volume detached for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426727 4624 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/917e505a-faa3-442e-8e82-cd347f005073-var-run\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.426992 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb97dc8-c1d9-4d0b-82df-672a9d561356-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cb97dc8-c1d9-4d0b-82df-672a9d561356" (UID: "7cb97dc8-c1d9-4d0b-82df-672a9d561356"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.431514 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb97dc8-c1d9-4d0b-82df-672a9d561356-kube-api-access-mnlpd" (OuterVolumeSpecName: "kube-api-access-mnlpd") pod "7cb97dc8-c1d9-4d0b-82df-672a9d561356" (UID: "7cb97dc8-c1d9-4d0b-82df-672a9d561356"). InnerVolumeSpecName "kube-api-access-mnlpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.528968 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb97dc8-c1d9-4d0b-82df-672a9d561356-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.529521 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlpd\" (UniqueName: \"kubernetes.io/projected/7cb97dc8-c1d9-4d0b-82df-672a9d561356-kube-api-access-mnlpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.826143 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jj9xk" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.826126 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jj9xk" event={"ID":"7cb97dc8-c1d9-4d0b-82df-672a9d561356","Type":"ContainerDied","Data":"50e4391d519369c01d2441d8658912800bef39d5085267365e938841ab24db41"} Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.826235 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e4391d519369c01d2441d8658912800bef39d5085267365e938841ab24db41" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.828625 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-phft7-config-tlgfq" event={"ID":"917e505a-faa3-442e-8e82-cd347f005073","Type":"ContainerDied","Data":"4b56e42847809b5178bfc450d90991eefcf796e6ffa4e1aba968b1cc360a04c4"} Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.828683 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b56e42847809b5178bfc450d90991eefcf796e6ffa4e1aba968b1cc360a04c4" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.828728 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-phft7-config-tlgfq" Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.830972 4624 generic.go:334] "Generic (PLEG): container finished" podID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerID="9c968899766310fc36273a2661308455478e9e58375c07fb3ca1cf3e5e1068f8" exitCode=0 Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.831014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" event={"ID":"fa0cd34b-c733-4703-9728-db6d9c69f888","Type":"ContainerDied","Data":"9c968899766310fc36273a2661308455478e9e58375c07fb3ca1cf3e5e1068f8"} Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.831033 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" event={"ID":"fa0cd34b-c733-4703-9728-db6d9c69f888","Type":"ContainerStarted","Data":"de9b33658f1aae69ca2ddfcef16c67eec53a38a0fc6a1b5aa7eb5f5af2ff6175"} Feb 28 03:53:39 crc kubenswrapper[4624]: I0228 03:53:39.970757 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-phft7" Feb 28 03:53:40 crc kubenswrapper[4624]: I0228 03:53:40.383812 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-phft7-config-tlgfq"] Feb 28 03:53:40 crc kubenswrapper[4624]: I0228 03:53:40.399061 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-phft7-config-tlgfq"] Feb 28 03:53:42 crc kubenswrapper[4624]: I0228 03:53:42.104706 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917e505a-faa3-442e-8e82-cd347f005073" path="/var/lib/kubelet/pods/917e505a-faa3-442e-8e82-cd347f005073/volumes" Feb 28 03:53:45 crc kubenswrapper[4624]: I0228 03:53:45.206567 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.0.101:5671: connect: connection refused" Feb 28 03:53:45 crc kubenswrapper[4624]: I0228 03:53:45.341204 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 28 03:53:49 crc kubenswrapper[4624]: E0228 03:53:49.677026 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 28 03:53:49 crc kubenswrapper[4624]: E0228 03:53:49.678464 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8blt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceacc
ount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-jmxqw_openstack(e987d56b-dcae-4f73-8e96-9010674f3c4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:53:49 crc kubenswrapper[4624]: E0228 03:53:49.680946 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-jmxqw" podUID="e987d56b-dcae-4f73-8e96-9010674f3c4e" Feb 28 03:53:49 crc kubenswrapper[4624]: I0228 03:53:49.944150 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" event={"ID":"fa0cd34b-c733-4703-9728-db6d9c69f888","Type":"ContainerStarted","Data":"772af69066771ce1f1d37e96334fecb39c7f80fd096300cb0b8cceef5c5486b2"} Feb 28 03:53:49 crc kubenswrapper[4624]: I0228 03:53:49.944204 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:49 crc kubenswrapper[4624]: E0228 03:53:49.944416 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-jmxqw" podUID="e987d56b-dcae-4f73-8e96-9010674f3c4e" Feb 28 03:53:49 crc kubenswrapper[4624]: I0228 03:53:49.990717 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" podStartSLOduration=11.990692965000001 podStartE2EDuration="11.990692965s" podCreationTimestamp="2026-02-28 03:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:49.989623486 +0000 UTC m=+1084.653662795" watchObservedRunningTime="2026-02-28 03:53:49.990692965 +0000 UTC m=+1084.654732274" Feb 28 03:53:55 crc kubenswrapper[4624]: I0228 03:53:55.207380 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:53:55 crc kubenswrapper[4624]: I0228 03:53:55.339370 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 03:53:56 crc kubenswrapper[4624]: I0228 03:53:56.976107 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xgkww"] Feb 28 03:53:56 crc kubenswrapper[4624]: E0228 03:53:56.976834 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917e505a-faa3-442e-8e82-cd347f005073" containerName="ovn-config" Feb 28 03:53:56 crc kubenswrapper[4624]: I0228 03:53:56.976848 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="917e505a-faa3-442e-8e82-cd347f005073" containerName="ovn-config" Feb 28 03:53:56 crc kubenswrapper[4624]: E0228 03:53:56.976866 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb97dc8-c1d9-4d0b-82df-672a9d561356" containerName="mariadb-account-create-update" Feb 28 03:53:56 crc kubenswrapper[4624]: I0228 03:53:56.976873 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7cb97dc8-c1d9-4d0b-82df-672a9d561356" containerName="mariadb-account-create-update" Feb 28 03:53:56 crc kubenswrapper[4624]: I0228 03:53:56.977047 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="917e505a-faa3-442e-8e82-cd347f005073" containerName="ovn-config" Feb 28 03:53:56 crc kubenswrapper[4624]: I0228 03:53:56.977060 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb97dc8-c1d9-4d0b-82df-672a9d561356" containerName="mariadb-account-create-update" Feb 28 03:53:56 crc kubenswrapper[4624]: I0228 03:53:56.977692 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.028732 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xgkww"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.153870 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k2g\" (UniqueName: \"kubernetes.io/projected/b6787397-0210-4076-90cd-039c9dae6dcb-kube-api-access-n8k2g\") pod \"cinder-db-create-xgkww\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") " pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.153951 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6787397-0210-4076-90cd-039c9dae6dcb-operator-scripts\") pod \"cinder-db-create-xgkww\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") " pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.195159 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b63c-account-create-update-phz2m"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.196226 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.210664 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.223192 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b63c-account-create-update-phz2m"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.256677 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k2g\" (UniqueName: \"kubernetes.io/projected/b6787397-0210-4076-90cd-039c9dae6dcb-kube-api-access-n8k2g\") pod \"cinder-db-create-xgkww\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") " pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.256753 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6787397-0210-4076-90cd-039c9dae6dcb-operator-scripts\") pod \"cinder-db-create-xgkww\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") " pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.259135 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6787397-0210-4076-90cd-039c9dae6dcb-operator-scripts\") pod \"cinder-db-create-xgkww\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") " pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.308159 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k2g\" (UniqueName: \"kubernetes.io/projected/b6787397-0210-4076-90cd-039c9dae6dcb-kube-api-access-n8k2g\") pod \"cinder-db-create-xgkww\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") " pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 
03:53:57.358336 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac85b036-a204-4026-841a-e4d3f91841ef-operator-scripts\") pod \"cinder-b63c-account-create-update-phz2m\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") " pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.358435 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thb4\" (UniqueName: \"kubernetes.io/projected/ac85b036-a204-4026-841a-e4d3f91841ef-kube-api-access-4thb4\") pod \"cinder-b63c-account-create-update-phz2m\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") " pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.459757 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac85b036-a204-4026-841a-e4d3f91841ef-operator-scripts\") pod \"cinder-b63c-account-create-update-phz2m\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") " pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.459801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thb4\" (UniqueName: \"kubernetes.io/projected/ac85b036-a204-4026-841a-e4d3f91841ef-kube-api-access-4thb4\") pod \"cinder-b63c-account-create-update-phz2m\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") " pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.460922 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac85b036-a204-4026-841a-e4d3f91841ef-operator-scripts\") pod \"cinder-b63c-account-create-update-phz2m\" (UID: 
\"ac85b036-a204-4026-841a-e4d3f91841ef\") " pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.506318 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thb4\" (UniqueName: \"kubernetes.io/projected/ac85b036-a204-4026-841a-e4d3f91841ef-kube-api-access-4thb4\") pod \"cinder-b63c-account-create-update-phz2m\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") " pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.511309 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b63c-account-create-update-phz2m" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.597959 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xgkww" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.603329 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-dhvq6"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.604729 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.619637 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78ef-account-create-update-jgbrw"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.621519 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.625970 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.695630 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8xzgn"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.702048 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.709554 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bkrfd" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.709805 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.709950 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.715475 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.754348 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78ef-account-create-update-jgbrw"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.769277 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mcnvn"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.771891 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v5lr\" (UniqueName: \"kubernetes.io/projected/3dbf943a-6678-4854-8bba-6d319f22b039-kube-api-access-9v5lr\") pod \"barbican-db-create-dhvq6\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") " pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:57 crc 
kubenswrapper[4624]: I0228 03:53:57.771939 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dbf943a-6678-4854-8bba-6d319f22b039-operator-scripts\") pod \"barbican-db-create-dhvq6\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") " pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.771964 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfdda2ff-5aea-4055-9c24-10ae333a0681-operator-scripts\") pod \"neutron-78ef-account-create-update-jgbrw\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") " pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.772011 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zd8z\" (UniqueName: \"kubernetes.io/projected/cfdda2ff-5aea-4055-9c24-10ae333a0681-kube-api-access-9zd8z\") pod \"neutron-78ef-account-create-update-jgbrw\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") " pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.773246 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.789525 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="c9c8d03c-80e2-42fc-a320-8175c10a59c4" containerName="galera" probeResult="failure" output="command timed out" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.789886 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8xzgn"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.814688 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dhvq6"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.843156 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mcnvn"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873435 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26804f93-99d0-4652-bdc9-10c3283cbd57-operator-scripts\") pod \"neutron-db-create-mcnvn\" (UID: \"26804f93-99d0-4652-bdc9-10c3283cbd57\") " pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873483 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v5lr\" (UniqueName: \"kubernetes.io/projected/3dbf943a-6678-4854-8bba-6d319f22b039-kube-api-access-9v5lr\") pod \"barbican-db-create-dhvq6\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") " pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873537 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dbf943a-6678-4854-8bba-6d319f22b039-operator-scripts\") pod \"barbican-db-create-dhvq6\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") " pod="openstack/barbican-db-create-dhvq6" Feb 
28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873561 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfdda2ff-5aea-4055-9c24-10ae333a0681-operator-scripts\") pod \"neutron-78ef-account-create-update-jgbrw\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") " pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873611 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zd8z\" (UniqueName: \"kubernetes.io/projected/cfdda2ff-5aea-4055-9c24-10ae333a0681-kube-api-access-9zd8z\") pod \"neutron-78ef-account-create-update-jgbrw\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") " pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873629 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-combined-ca-bundle\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873653 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6mz\" (UniqueName: \"kubernetes.io/projected/ad93dbf2-c574-4701-8d8c-14f597593ea1-kube-api-access-mb6mz\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873674 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zrkh\" (UniqueName: \"kubernetes.io/projected/26804f93-99d0-4652-bdc9-10c3283cbd57-kube-api-access-9zrkh\") pod \"neutron-db-create-mcnvn\" (UID: 
\"26804f93-99d0-4652-bdc9-10c3283cbd57\") " pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.873708 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-config-data\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.880421 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dbf943a-6678-4854-8bba-6d319f22b039-operator-scripts\") pod \"barbican-db-create-dhvq6\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") " pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.880478 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfdda2ff-5aea-4055-9c24-10ae333a0681-operator-scripts\") pod \"neutron-78ef-account-create-update-jgbrw\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") " pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.889984 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ca7d-account-create-update-zwcn8"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.893031 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.898140 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.937190 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zd8z\" (UniqueName: \"kubernetes.io/projected/cfdda2ff-5aea-4055-9c24-10ae333a0681-kube-api-access-9zd8z\") pod \"neutron-78ef-account-create-update-jgbrw\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") " pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.943545 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v5lr\" (UniqueName: \"kubernetes.io/projected/3dbf943a-6678-4854-8bba-6d319f22b039-kube-api-access-9v5lr\") pod \"barbican-db-create-dhvq6\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") " pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.954220 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ca7d-account-create-update-zwcn8"] Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.977237 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-combined-ca-bundle\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.977403 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6mz\" (UniqueName: \"kubernetes.io/projected/ad93dbf2-c574-4701-8d8c-14f597593ea1-kube-api-access-mb6mz\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " 
pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.977448 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zrkh\" (UniqueName: \"kubernetes.io/projected/26804f93-99d0-4652-bdc9-10c3283cbd57-kube-api-access-9zrkh\") pod \"neutron-db-create-mcnvn\" (UID: \"26804f93-99d0-4652-bdc9-10c3283cbd57\") " pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.977508 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/941df155-1f41-44a9-bdf8-80a7e7864a2e-operator-scripts\") pod \"barbican-ca7d-account-create-update-zwcn8\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") " pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.977607 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-config-data\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.977776 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2vs\" (UniqueName: \"kubernetes.io/projected/941df155-1f41-44a9-bdf8-80a7e7864a2e-kube-api-access-kt2vs\") pod \"barbican-ca7d-account-create-update-zwcn8\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") " pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.978022 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26804f93-99d0-4652-bdc9-10c3283cbd57-operator-scripts\") pod \"neutron-db-create-mcnvn\" (UID: 
\"26804f93-99d0-4652-bdc9-10c3283cbd57\") " pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.979023 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26804f93-99d0-4652-bdc9-10c3283cbd57-operator-scripts\") pod \"neutron-db-create-mcnvn\" (UID: \"26804f93-99d0-4652-bdc9-10c3283cbd57\") " pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:57 crc kubenswrapper[4624]: I0228 03:53:57.992910 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-config-data\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:57.999311 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-combined-ca-bundle\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.001348 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dhvq6" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.023705 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78ef-account-create-update-jgbrw" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.027523 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zrkh\" (UniqueName: \"kubernetes.io/projected/26804f93-99d0-4652-bdc9-10c3283cbd57-kube-api-access-9zrkh\") pod \"neutron-db-create-mcnvn\" (UID: \"26804f93-99d0-4652-bdc9-10c3283cbd57\") " pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.051051 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6mz\" (UniqueName: \"kubernetes.io/projected/ad93dbf2-c574-4701-8d8c-14f597593ea1-kube-api-access-mb6mz\") pod \"keystone-db-sync-8xzgn\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") " pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.079759 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/941df155-1f41-44a9-bdf8-80a7e7864a2e-operator-scripts\") pod \"barbican-ca7d-account-create-update-zwcn8\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") " pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.079868 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt2vs\" (UniqueName: \"kubernetes.io/projected/941df155-1f41-44a9-bdf8-80a7e7864a2e-kube-api-access-kt2vs\") pod \"barbican-ca7d-account-create-update-zwcn8\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") " pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.080821 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/941df155-1f41-44a9-bdf8-80a7e7864a2e-operator-scripts\") pod 
\"barbican-ca7d-account-create-update-zwcn8\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") " pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.107495 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt2vs\" (UniqueName: \"kubernetes.io/projected/941df155-1f41-44a9-bdf8-80a7e7864a2e-kube-api-access-kt2vs\") pod \"barbican-ca7d-account-create-update-zwcn8\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") " pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.136248 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mcnvn" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.251354 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ca7d-account-create-update-zwcn8" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.348692 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.383661 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b63c-account-create-update-phz2m"] Feb 28 03:53:58 crc kubenswrapper[4624]: W0228 03:53:58.506300 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac85b036_a204_4026_841a_e4d3f91841ef.slice/crio-5d61588b1cdd0473c61cf5837d69dcaed583e31e8a1ca02d059876d090174af8 WatchSource:0}: Error finding container 5d61588b1cdd0473c61cf5837d69dcaed583e31e8a1ca02d059876d090174af8: Status 404 returned error can't find the container with id 5d61588b1cdd0473c61cf5837d69dcaed583e31e8a1ca02d059876d090174af8 Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.544971 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.570482 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xgkww"] Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.646968 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lnsfp"] Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.647265 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerName="dnsmasq-dns" containerID="cri-o://956dd0d948d1393de5e82cbbe3d1fec7b8db9b6929e07b8b407a4fc2afb034be" gracePeriod=10 Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.780843 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dhvq6"] Feb 28 03:53:58 crc kubenswrapper[4624]: I0228 03:53:58.994254 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78ef-account-create-update-jgbrw"] Feb 28 03:53:59 crc 
kubenswrapper[4624]: I0228 03:53:59.018898 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mcnvn"] Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.081660 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78ef-account-create-update-jgbrw" event={"ID":"cfdda2ff-5aea-4055-9c24-10ae333a0681","Type":"ContainerStarted","Data":"82aa345b144f2cb637266fa0296a74bd638ab268ba138f6d90e38179ff2e937f"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.087546 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dhvq6" event={"ID":"3dbf943a-6678-4854-8bba-6d319f22b039","Type":"ContainerStarted","Data":"db1d9dd760ea9fcc53b414db2fbf588554a9a23988976b737e538146f445edd8"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.087577 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dhvq6" event={"ID":"3dbf943a-6678-4854-8bba-6d319f22b039","Type":"ContainerStarted","Data":"781fd5ff587e9856623ba1516f33f20318650da6c7fef8cfd1a8013ee6f2b1ef"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.091006 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xgkww" event={"ID":"b6787397-0210-4076-90cd-039c9dae6dcb","Type":"ContainerStarted","Data":"283cdc928f65f1a37d0ed11b11f82ca4c86a1c24efef9adf112df1673cd556bd"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.091054 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xgkww" event={"ID":"b6787397-0210-4076-90cd-039c9dae6dcb","Type":"ContainerStarted","Data":"8d3f49ebbadb0614d88d823c9e5780c31c4357a0cfe18b0e01120566b58a88b0"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.098001 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mcnvn" 
event={"ID":"26804f93-99d0-4652-bdc9-10c3283cbd57","Type":"ContainerStarted","Data":"aabee0c18e8a7a490cd182f584d5beede44be651ad772da46f5078c3c2360b86"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.100440 4624 generic.go:334] "Generic (PLEG): container finished" podID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerID="956dd0d948d1393de5e82cbbe3d1fec7b8db9b6929e07b8b407a4fc2afb034be" exitCode=0 Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.100506 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" event={"ID":"33a20faf-8a53-44bb-880e-09eec7ab14b7","Type":"ContainerDied","Data":"956dd0d948d1393de5e82cbbe3d1fec7b8db9b6929e07b8b407a4fc2afb034be"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.118199 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b63c-account-create-update-phz2m" event={"ID":"ac85b036-a204-4026-841a-e4d3f91841ef","Type":"ContainerStarted","Data":"1fd0de55768ac8f25e420fee148da3d9b4231629c69a3ba5a906ee755cecb368"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.118258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b63c-account-create-update-phz2m" event={"ID":"ac85b036-a204-4026-841a-e4d3f91841ef","Type":"ContainerStarted","Data":"5d61588b1cdd0473c61cf5837d69dcaed583e31e8a1ca02d059876d090174af8"} Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.123951 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-dhvq6" podStartSLOduration=2.12391789 podStartE2EDuration="2.12391789s" podCreationTimestamp="2026-02-28 03:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:59.119555541 +0000 UTC m=+1093.783594850" watchObservedRunningTime="2026-02-28 03:53:59.12391789 +0000 UTC m=+1093.787957199" Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.160381 
4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-xgkww" podStartSLOduration=3.160356888 podStartE2EDuration="3.160356888s" podCreationTimestamp="2026-02-28 03:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:59.147168786 +0000 UTC m=+1093.811208095" watchObservedRunningTime="2026-02-28 03:53:59.160356888 +0000 UTC m=+1093.824396197" Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.176558 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b63c-account-create-update-phz2m" podStartSLOduration=2.17654176 podStartE2EDuration="2.17654176s" podCreationTimestamp="2026-02-28 03:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:53:59.172956733 +0000 UTC m=+1093.836996042" watchObservedRunningTime="2026-02-28 03:53:59.17654176 +0000 UTC m=+1093.840581069" Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.235493 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.280468 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ca7d-account-create-update-zwcn8"] Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.336129 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-dns-svc\") pod \"33a20faf-8a53-44bb-880e-09eec7ab14b7\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.336295 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-sb\") pod \"33a20faf-8a53-44bb-880e-09eec7ab14b7\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.336384 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp49d\" (UniqueName: \"kubernetes.io/projected/33a20faf-8a53-44bb-880e-09eec7ab14b7-kube-api-access-bp49d\") pod \"33a20faf-8a53-44bb-880e-09eec7ab14b7\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.336412 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-config\") pod \"33a20faf-8a53-44bb-880e-09eec7ab14b7\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.336531 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-nb\") pod \"33a20faf-8a53-44bb-880e-09eec7ab14b7\" (UID: \"33a20faf-8a53-44bb-880e-09eec7ab14b7\") " 
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.359559 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a20faf-8a53-44bb-880e-09eec7ab14b7-kube-api-access-bp49d" (OuterVolumeSpecName: "kube-api-access-bp49d") pod "33a20faf-8a53-44bb-880e-09eec7ab14b7" (UID: "33a20faf-8a53-44bb-880e-09eec7ab14b7"). InnerVolumeSpecName "kube-api-access-bp49d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.438562 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp49d\" (UniqueName: \"kubernetes.io/projected/33a20faf-8a53-44bb-880e-09eec7ab14b7-kube-api-access-bp49d\") on node \"crc\" DevicePath \"\""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.444624 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33a20faf-8a53-44bb-880e-09eec7ab14b7" (UID: "33a20faf-8a53-44bb-880e-09eec7ab14b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.483409 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33a20faf-8a53-44bb-880e-09eec7ab14b7" (UID: "33a20faf-8a53-44bb-880e-09eec7ab14b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.487723 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33a20faf-8a53-44bb-880e-09eec7ab14b7" (UID: "33a20faf-8a53-44bb-880e-09eec7ab14b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.497301 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-config" (OuterVolumeSpecName: "config") pod "33a20faf-8a53-44bb-880e-09eec7ab14b7" (UID: "33a20faf-8a53-44bb-880e-09eec7ab14b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.515957 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8xzgn"]
Feb 28 03:53:59 crc kubenswrapper[4624]: E0228 03:53:59.527393 4624 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.227:46828->38.102.83.227:35197: write tcp 38.102.83.227:46828->38.102.83.227:35197: write: broken pipe
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.552004 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.552065 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.552077 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 28 03:53:59 crc kubenswrapper[4624]: I0228 03:53:59.552115 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a20faf-8a53-44bb-880e-09eec7ab14b7-config\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.137443 4624 generic.go:334] "Generic (PLEG): container finished" podID="26804f93-99d0-4652-bdc9-10c3283cbd57" containerID="b7f9fe2ce98825cbbf96567f38a06779ba6c414aaf8cbe52b7e65b73fbf1759d" exitCode=0
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.137986 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mcnvn" event={"ID":"26804f93-99d0-4652-bdc9-10c3283cbd57","Type":"ContainerDied","Data":"b7f9fe2ce98825cbbf96567f38a06779ba6c414aaf8cbe52b7e65b73fbf1759d"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.143015 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca7d-account-create-update-zwcn8" event={"ID":"941df155-1f41-44a9-bdf8-80a7e7864a2e","Type":"ContainerStarted","Data":"8bea4d2098de4150a4cf2e08f339d2cfcc89f8d343586c83680d6fd75513239a"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.143076 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca7d-account-create-update-zwcn8" event={"ID":"941df155-1f41-44a9-bdf8-80a7e7864a2e","Type":"ContainerStarted","Data":"70442ddc09bcc2bb9777dca6ea6dc3405218189291f5a58398f430b839cf7270"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.145829 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537514-44w2r"]
Feb 28 03:54:00 crc kubenswrapper[4624]: E0228 03:54:00.146408 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerName="init"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.146429 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerName="init"
Feb 28 03:54:00 crc kubenswrapper[4624]: E0228 03:54:00.146460 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerName="dnsmasq-dns"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.146468 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerName="dnsmasq-dns"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.146636 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" containerName="dnsmasq-dns"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.147239 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.151943 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.152226 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.152459 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.153493 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp" event={"ID":"33a20faf-8a53-44bb-880e-09eec7ab14b7","Type":"ContainerDied","Data":"39dce93ae743644359c092be1d71e4e755bcc00ea8bd4fd38528448c976830c4"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.153563 4624 scope.go:117] "RemoveContainer" containerID="956dd0d948d1393de5e82cbbe3d1fec7b8db9b6929e07b8b407a4fc2afb034be"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.153722 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lnsfp"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.164205 4624 generic.go:334] "Generic (PLEG): container finished" podID="ac85b036-a204-4026-841a-e4d3f91841ef" containerID="1fd0de55768ac8f25e420fee148da3d9b4231629c69a3ba5a906ee755cecb368" exitCode=0
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.164333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b63c-account-create-update-phz2m" event={"ID":"ac85b036-a204-4026-841a-e4d3f91841ef","Type":"ContainerDied","Data":"1fd0de55768ac8f25e420fee148da3d9b4231629c69a3ba5a906ee755cecb368"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.167037 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8xzgn" event={"ID":"ad93dbf2-c574-4701-8d8c-14f597593ea1","Type":"ContainerStarted","Data":"1bfbeeb9c8dc00b3b88ee46a74f0f7b2e44ff2954f2da0012d4a51d113e34b4d"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.170332 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537514-44w2r"]
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.173486 4624 generic.go:334] "Generic (PLEG): container finished" podID="cfdda2ff-5aea-4055-9c24-10ae333a0681" containerID="2a6c019d60640347946bf734a57b95a9fd5451cc4662dd20bae8810144016b35" exitCode=0
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.173664 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78ef-account-create-update-jgbrw" event={"ID":"cfdda2ff-5aea-4055-9c24-10ae333a0681","Type":"ContainerDied","Data":"2a6c019d60640347946bf734a57b95a9fd5451cc4662dd20bae8810144016b35"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.185017 4624 scope.go:117] "RemoveContainer" containerID="1a2057b2090928112581e495256f6c5c9ef1b6a0883efc0f3ac32e5f1628785e"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.188113 4624 generic.go:334] "Generic (PLEG): container finished" podID="b6787397-0210-4076-90cd-039c9dae6dcb" containerID="283cdc928f65f1a37d0ed11b11f82ca4c86a1c24efef9adf112df1673cd556bd" exitCode=0
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.188232 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xgkww" event={"ID":"b6787397-0210-4076-90cd-039c9dae6dcb","Type":"ContainerDied","Data":"283cdc928f65f1a37d0ed11b11f82ca4c86a1c24efef9adf112df1673cd556bd"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.190710 4624 generic.go:334] "Generic (PLEG): container finished" podID="3dbf943a-6678-4854-8bba-6d319f22b039" containerID="db1d9dd760ea9fcc53b414db2fbf588554a9a23988976b737e538146f445edd8" exitCode=0
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.190812 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dhvq6" event={"ID":"3dbf943a-6678-4854-8bba-6d319f22b039","Type":"ContainerDied","Data":"db1d9dd760ea9fcc53b414db2fbf588554a9a23988976b737e538146f445edd8"}
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.236609 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ca7d-account-create-update-zwcn8" podStartSLOduration=3.236582117 podStartE2EDuration="3.236582117s" podCreationTimestamp="2026-02-28 03:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:00.226578514 +0000 UTC m=+1094.890617833" watchObservedRunningTime="2026-02-28 03:54:00.236582117 +0000 UTC m=+1094.900621426"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.273761 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgq7\" (UniqueName: \"kubernetes.io/projected/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e-kube-api-access-7jgq7\") pod \"auto-csr-approver-29537514-44w2r\" (UID: \"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e\") " pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:00 crc kubenswrapper[4624]: I0228 03:54:00.748911 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jgq7\" (UniqueName: \"kubernetes.io/projected/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e-kube-api-access-7jgq7\") pod \"auto-csr-approver-29537514-44w2r\" (UID: \"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e\") " pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.148004 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jgq7\" (UniqueName: \"kubernetes.io/projected/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e-kube-api-access-7jgq7\") pod \"auto-csr-approver-29537514-44w2r\" (UID: \"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e\") " pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.195672 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lnsfp"]
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.214574 4624 generic.go:334] "Generic (PLEG): container finished" podID="941df155-1f41-44a9-bdf8-80a7e7864a2e" containerID="8bea4d2098de4150a4cf2e08f339d2cfcc89f8d343586c83680d6fd75513239a" exitCode=0
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.214818 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca7d-account-create-update-zwcn8" event={"ID":"941df155-1f41-44a9-bdf8-80a7e7864a2e","Type":"ContainerDied","Data":"8bea4d2098de4150a4cf2e08f339d2cfcc89f8d343586c83680d6fd75513239a"}
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.232899 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lnsfp"]
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.407287 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.736035 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b63c-account-create-update-phz2m"
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.834036 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thb4\" (UniqueName: \"kubernetes.io/projected/ac85b036-a204-4026-841a-e4d3f91841ef-kube-api-access-4thb4\") pod \"ac85b036-a204-4026-841a-e4d3f91841ef\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") "
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.834631 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac85b036-a204-4026-841a-e4d3f91841ef-operator-scripts\") pod \"ac85b036-a204-4026-841a-e4d3f91841ef\" (UID: \"ac85b036-a204-4026-841a-e4d3f91841ef\") "
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.836351 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac85b036-a204-4026-841a-e4d3f91841ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac85b036-a204-4026-841a-e4d3f91841ef" (UID: "ac85b036-a204-4026-841a-e4d3f91841ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.853019 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac85b036-a204-4026-841a-e4d3f91841ef-kube-api-access-4thb4" (OuterVolumeSpecName: "kube-api-access-4thb4") pod "ac85b036-a204-4026-841a-e4d3f91841ef" (UID: "ac85b036-a204-4026-841a-e4d3f91841ef"). InnerVolumeSpecName "kube-api-access-4thb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.920341 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dhvq6"
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.936705 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac85b036-a204-4026-841a-e4d3f91841ef-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.936741 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thb4\" (UniqueName: \"kubernetes.io/projected/ac85b036-a204-4026-841a-e4d3f91841ef-kube-api-access-4thb4\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:01 crc kubenswrapper[4624]: I0228 03:54:01.994008 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xgkww"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.010011 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mcnvn"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.011512 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78ef-account-create-update-jgbrw"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.037897 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dbf943a-6678-4854-8bba-6d319f22b039-operator-scripts\") pod \"3dbf943a-6678-4854-8bba-6d319f22b039\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.038253 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v5lr\" (UniqueName: \"kubernetes.io/projected/3dbf943a-6678-4854-8bba-6d319f22b039-kube-api-access-9v5lr\") pod \"3dbf943a-6678-4854-8bba-6d319f22b039\" (UID: \"3dbf943a-6678-4854-8bba-6d319f22b039\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.038774 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dbf943a-6678-4854-8bba-6d319f22b039-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dbf943a-6678-4854-8bba-6d319f22b039" (UID: "3dbf943a-6678-4854-8bba-6d319f22b039"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.052497 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbf943a-6678-4854-8bba-6d319f22b039-kube-api-access-9v5lr" (OuterVolumeSpecName: "kube-api-access-9v5lr") pod "3dbf943a-6678-4854-8bba-6d319f22b039" (UID: "3dbf943a-6678-4854-8bba-6d319f22b039"). InnerVolumeSpecName "kube-api-access-9v5lr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.124682 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a20faf-8a53-44bb-880e-09eec7ab14b7" path="/var/lib/kubelet/pods/33a20faf-8a53-44bb-880e-09eec7ab14b7/volumes"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.139992 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfdda2ff-5aea-4055-9c24-10ae333a0681-operator-scripts\") pod \"cfdda2ff-5aea-4055-9c24-10ae333a0681\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140121 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zd8z\" (UniqueName: \"kubernetes.io/projected/cfdda2ff-5aea-4055-9c24-10ae333a0681-kube-api-access-9zd8z\") pod \"cfdda2ff-5aea-4055-9c24-10ae333a0681\" (UID: \"cfdda2ff-5aea-4055-9c24-10ae333a0681\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140198 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26804f93-99d0-4652-bdc9-10c3283cbd57-operator-scripts\") pod \"26804f93-99d0-4652-bdc9-10c3283cbd57\" (UID: \"26804f93-99d0-4652-bdc9-10c3283cbd57\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140249 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8k2g\" (UniqueName: \"kubernetes.io/projected/b6787397-0210-4076-90cd-039c9dae6dcb-kube-api-access-n8k2g\") pod \"b6787397-0210-4076-90cd-039c9dae6dcb\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140295 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6787397-0210-4076-90cd-039c9dae6dcb-operator-scripts\") pod \"b6787397-0210-4076-90cd-039c9dae6dcb\" (UID: \"b6787397-0210-4076-90cd-039c9dae6dcb\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140322 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zrkh\" (UniqueName: \"kubernetes.io/projected/26804f93-99d0-4652-bdc9-10c3283cbd57-kube-api-access-9zrkh\") pod \"26804f93-99d0-4652-bdc9-10c3283cbd57\" (UID: \"26804f93-99d0-4652-bdc9-10c3283cbd57\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140568 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfdda2ff-5aea-4055-9c24-10ae333a0681-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfdda2ff-5aea-4055-9c24-10ae333a0681" (UID: "cfdda2ff-5aea-4055-9c24-10ae333a0681"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140974 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dbf943a-6678-4854-8bba-6d319f22b039-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.140997 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v5lr\" (UniqueName: \"kubernetes.io/projected/3dbf943a-6678-4854-8bba-6d319f22b039-kube-api-access-9v5lr\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.141011 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfdda2ff-5aea-4055-9c24-10ae333a0681-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.141460 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6787397-0210-4076-90cd-039c9dae6dcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6787397-0210-4076-90cd-039c9dae6dcb" (UID: "b6787397-0210-4076-90cd-039c9dae6dcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.141588 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26804f93-99d0-4652-bdc9-10c3283cbd57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26804f93-99d0-4652-bdc9-10c3283cbd57" (UID: "26804f93-99d0-4652-bdc9-10c3283cbd57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.144267 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdda2ff-5aea-4055-9c24-10ae333a0681-kube-api-access-9zd8z" (OuterVolumeSpecName: "kube-api-access-9zd8z") pod "cfdda2ff-5aea-4055-9c24-10ae333a0681" (UID: "cfdda2ff-5aea-4055-9c24-10ae333a0681"). InnerVolumeSpecName "kube-api-access-9zd8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.144391 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26804f93-99d0-4652-bdc9-10c3283cbd57-kube-api-access-9zrkh" (OuterVolumeSpecName: "kube-api-access-9zrkh") pod "26804f93-99d0-4652-bdc9-10c3283cbd57" (UID: "26804f93-99d0-4652-bdc9-10c3283cbd57"). InnerVolumeSpecName "kube-api-access-9zrkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.147974 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6787397-0210-4076-90cd-039c9dae6dcb-kube-api-access-n8k2g" (OuterVolumeSpecName: "kube-api-access-n8k2g") pod "b6787397-0210-4076-90cd-039c9dae6dcb" (UID: "b6787397-0210-4076-90cd-039c9dae6dcb"). InnerVolumeSpecName "kube-api-access-n8k2g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.231797 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dhvq6"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.235896 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dhvq6" event={"ID":"3dbf943a-6678-4854-8bba-6d319f22b039","Type":"ContainerDied","Data":"781fd5ff587e9856623ba1516f33f20318650da6c7fef8cfd1a8013ee6f2b1ef"}
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.235964 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781fd5ff587e9856623ba1516f33f20318650da6c7fef8cfd1a8013ee6f2b1ef"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.250138 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xgkww" event={"ID":"b6787397-0210-4076-90cd-039c9dae6dcb","Type":"ContainerDied","Data":"8d3f49ebbadb0614d88d823c9e5780c31c4357a0cfe18b0e01120566b58a88b0"}
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.250185 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d3f49ebbadb0614d88d823c9e5780c31c4357a0cfe18b0e01120566b58a88b0"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.250268 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xgkww"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.251278 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zd8z\" (UniqueName: \"kubernetes.io/projected/cfdda2ff-5aea-4055-9c24-10ae333a0681-kube-api-access-9zd8z\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.251559 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26804f93-99d0-4652-bdc9-10c3283cbd57-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.251593 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8k2g\" (UniqueName: \"kubernetes.io/projected/b6787397-0210-4076-90cd-039c9dae6dcb-kube-api-access-n8k2g\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.251605 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6787397-0210-4076-90cd-039c9dae6dcb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.251616 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zrkh\" (UniqueName: \"kubernetes.io/projected/26804f93-99d0-4652-bdc9-10c3283cbd57-kube-api-access-9zrkh\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.257225 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mcnvn"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.258272 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mcnvn" event={"ID":"26804f93-99d0-4652-bdc9-10c3283cbd57","Type":"ContainerDied","Data":"aabee0c18e8a7a490cd182f584d5beede44be651ad772da46f5078c3c2360b86"}
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.258320 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aabee0c18e8a7a490cd182f584d5beede44be651ad772da46f5078c3c2360b86"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.262188 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b63c-account-create-update-phz2m"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.262221 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b63c-account-create-update-phz2m" event={"ID":"ac85b036-a204-4026-841a-e4d3f91841ef","Type":"ContainerDied","Data":"5d61588b1cdd0473c61cf5837d69dcaed583e31e8a1ca02d059876d090174af8"}
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.264280 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d61588b1cdd0473c61cf5837d69dcaed583e31e8a1ca02d059876d090174af8"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.276247 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78ef-account-create-update-jgbrw"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.276534 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78ef-account-create-update-jgbrw" event={"ID":"cfdda2ff-5aea-4055-9c24-10ae333a0681","Type":"ContainerDied","Data":"82aa345b144f2cb637266fa0296a74bd638ab268ba138f6d90e38179ff2e937f"}
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.276675 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82aa345b144f2cb637266fa0296a74bd638ab268ba138f6d90e38179ff2e937f"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.313934 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537514-44w2r"]
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.640740 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ca7d-account-create-update-zwcn8"
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.763893 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/941df155-1f41-44a9-bdf8-80a7e7864a2e-operator-scripts\") pod \"941df155-1f41-44a9-bdf8-80a7e7864a2e\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.764243 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt2vs\" (UniqueName: \"kubernetes.io/projected/941df155-1f41-44a9-bdf8-80a7e7864a2e-kube-api-access-kt2vs\") pod \"941df155-1f41-44a9-bdf8-80a7e7864a2e\" (UID: \"941df155-1f41-44a9-bdf8-80a7e7864a2e\") "
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.766949 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941df155-1f41-44a9-bdf8-80a7e7864a2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "941df155-1f41-44a9-bdf8-80a7e7864a2e" (UID: "941df155-1f41-44a9-bdf8-80a7e7864a2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.773600 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941df155-1f41-44a9-bdf8-80a7e7864a2e-kube-api-access-kt2vs" (OuterVolumeSpecName: "kube-api-access-kt2vs") pod "941df155-1f41-44a9-bdf8-80a7e7864a2e" (UID: "941df155-1f41-44a9-bdf8-80a7e7864a2e"). InnerVolumeSpecName "kube-api-access-kt2vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.867068 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt2vs\" (UniqueName: \"kubernetes.io/projected/941df155-1f41-44a9-bdf8-80a7e7864a2e-kube-api-access-kt2vs\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:02 crc kubenswrapper[4624]: I0228 03:54:02.867128 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/941df155-1f41-44a9-bdf8-80a7e7864a2e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:03 crc kubenswrapper[4624]: I0228 03:54:03.289451 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537514-44w2r" event={"ID":"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e","Type":"ContainerStarted","Data":"c96b478d3c68121efd746d0227a88c62643f0b106d37a75b298dbe592644630c"}
Feb 28 03:54:03 crc kubenswrapper[4624]: I0228 03:54:03.292306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ca7d-account-create-update-zwcn8" event={"ID":"941df155-1f41-44a9-bdf8-80a7e7864a2e","Type":"ContainerDied","Data":"70442ddc09bcc2bb9777dca6ea6dc3405218189291f5a58398f430b839cf7270"}
Feb 28 03:54:03 crc kubenswrapper[4624]: I0228 03:54:03.292333 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70442ddc09bcc2bb9777dca6ea6dc3405218189291f5a58398f430b839cf7270"
Feb 28 03:54:03 crc kubenswrapper[4624]: I0228 03:54:03.292420 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ca7d-account-create-update-zwcn8"
Feb 28 03:54:04 crc kubenswrapper[4624]: I0228 03:54:04.307800 4624 generic.go:334] "Generic (PLEG): container finished" podID="2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e" containerID="1cc0fca455bd32cd8d0ba414a92b4450c6254c4d8ce0c962d131629f93437410" exitCode=0
Feb 28 03:54:04 crc kubenswrapper[4624]: I0228 03:54:04.308226 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537514-44w2r" event={"ID":"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e","Type":"ContainerDied","Data":"1cc0fca455bd32cd8d0ba414a92b4450c6254c4d8ce0c962d131629f93437410"}
Feb 28 03:54:06 crc kubenswrapper[4624]: I0228 03:54:06.488799 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:06 crc kubenswrapper[4624]: I0228 03:54:06.649711 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jgq7\" (UniqueName: \"kubernetes.io/projected/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e-kube-api-access-7jgq7\") pod \"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e\" (UID: \"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e\") "
Feb 28 03:54:06 crc kubenswrapper[4624]: I0228 03:54:06.656584 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e-kube-api-access-7jgq7" (OuterVolumeSpecName: "kube-api-access-7jgq7") pod "2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e" (UID: "2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e"). InnerVolumeSpecName "kube-api-access-7jgq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:54:06 crc kubenswrapper[4624]: I0228 03:54:06.752696 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jgq7\" (UniqueName: \"kubernetes.io/projected/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e-kube-api-access-7jgq7\") on node \"crc\" DevicePath \"\""
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.334876 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmxqw" event={"ID":"e987d56b-dcae-4f73-8e96-9010674f3c4e","Type":"ContainerStarted","Data":"f6ce4a2c70a6813aff13e07dc29f496b6b87405812f7cf36e2d1098ff78481c3"}
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.338106 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537514-44w2r"
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.339214 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537514-44w2r" event={"ID":"2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e","Type":"ContainerDied","Data":"c96b478d3c68121efd746d0227a88c62643f0b106d37a75b298dbe592644630c"}
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.339268 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96b478d3c68121efd746d0227a88c62643f0b106d37a75b298dbe592644630c"
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.343370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8xzgn" event={"ID":"ad93dbf2-c574-4701-8d8c-14f597593ea1","Type":"ContainerStarted","Data":"e97c5c18ff52b1797a63167ae6ad7db176c84b98624dbb75c84232636f181179"}
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.377972 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jmxqw" podStartSLOduration=2.431886484 podStartE2EDuration="39.377931451s" podCreationTimestamp="2026-02-28 03:53:28 +0000 UTC" firstStartedPulling="2026-02-28 03:53:29.555545937 +0000 UTC m=+1064.219585246" lastFinishedPulling="2026-02-28 03:54:06.501590904 +0000 UTC m=+1101.165630213" observedRunningTime="2026-02-28 03:54:07.376057599 +0000 UTC m=+1102.040096918" watchObservedRunningTime="2026-02-28 03:54:07.377931451 +0000 UTC m=+1102.041970770"
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.418693 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8xzgn" podStartSLOduration=3.4971250830000002 podStartE2EDuration="10.418662876s" podCreationTimestamp="2026-02-28 03:53:57 +0000 UTC" firstStartedPulling="2026-02-28 03:53:59.606940167 +0000 UTC m=+1094.270979476" lastFinishedPulling="2026-02-28 03:54:06.52847796 +0000 UTC m=+1101.192517269" observedRunningTime="2026-02-28 03:54:07.415627683 +0000 UTC m=+1102.079666982" watchObservedRunningTime="2026-02-28 03:54:07.418662876 +0000 UTC m=+1102.082702195"
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.570714 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537508-srmfl"]
Feb 28 03:54:07 crc kubenswrapper[4624]: I0228 03:54:07.588657 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537508-srmfl"]
Feb 28 03:54:08 crc kubenswrapper[4624]: I0228 03:54:08.098489 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcfa6f52-194d-484f-8e89-4dc2bafc8a34" path="/var/lib/kubelet/pods/bcfa6f52-194d-484f-8e89-4dc2bafc8a34/volumes"
Feb 28 03:54:11 crc kubenswrapper[4624]: I0228 03:54:11.391403 4624 generic.go:334] "Generic (PLEG): container finished" podID="ad93dbf2-c574-4701-8d8c-14f597593ea1" containerID="e97c5c18ff52b1797a63167ae6ad7db176c84b98624dbb75c84232636f181179" exitCode=0
Feb 28 03:54:11 crc kubenswrapper[4624]: I0228 03:54:11.391539 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8xzgn" event={"ID":"ad93dbf2-c574-4701-8d8c-14f597593ea1","Type":"ContainerDied","Data":"e97c5c18ff52b1797a63167ae6ad7db176c84b98624dbb75c84232636f181179"}
Feb 28 03:54:12 crc kubenswrapper[4624]: I0228 03:54:12.823102 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8xzgn"
Feb 28 03:54:12 crc kubenswrapper[4624]: I0228 03:54:12.977301 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-config-data\") pod \"ad93dbf2-c574-4701-8d8c-14f597593ea1\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") "
Feb 28 03:54:12 crc kubenswrapper[4624]: I0228 03:54:12.977422 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6mz\" (UniqueName: \"kubernetes.io/projected/ad93dbf2-c574-4701-8d8c-14f597593ea1-kube-api-access-mb6mz\") pod \"ad93dbf2-c574-4701-8d8c-14f597593ea1\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") "
Feb 28 03:54:12 crc kubenswrapper[4624]: I0228 03:54:12.977444 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-combined-ca-bundle\") pod \"ad93dbf2-c574-4701-8d8c-14f597593ea1\" (UID: \"ad93dbf2-c574-4701-8d8c-14f597593ea1\") "
Feb 28 03:54:12 crc kubenswrapper[4624]: I0228 03:54:12.986540 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad93dbf2-c574-4701-8d8c-14f597593ea1-kube-api-access-mb6mz" (OuterVolumeSpecName: "kube-api-access-mb6mz") pod "ad93dbf2-c574-4701-8d8c-14f597593ea1" (UID: "ad93dbf2-c574-4701-8d8c-14f597593ea1"). InnerVolumeSpecName "kube-api-access-mb6mz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.008443 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad93dbf2-c574-4701-8d8c-14f597593ea1" (UID: "ad93dbf2-c574-4701-8d8c-14f597593ea1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.031805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-config-data" (OuterVolumeSpecName: "config-data") pod "ad93dbf2-c574-4701-8d8c-14f597593ea1" (UID: "ad93dbf2-c574-4701-8d8c-14f597593ea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.079860 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6mz\" (UniqueName: \"kubernetes.io/projected/ad93dbf2-c574-4701-8d8c-14f597593ea1-kube-api-access-mb6mz\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.079900 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.079910 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad93dbf2-c574-4701-8d8c-14f597593ea1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.416827 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8xzgn" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.416732 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8xzgn" event={"ID":"ad93dbf2-c574-4701-8d8c-14f597593ea1","Type":"ContainerDied","Data":"1bfbeeb9c8dc00b3b88ee46a74f0f7b2e44ff2954f2da0012d4a51d113e34b4d"} Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.419232 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfbeeb9c8dc00b3b88ee46a74f0f7b2e44ff2954f2da0012d4a51d113e34b4d" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.650693 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d5h4h"] Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.652189 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e" containerName="oc" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.653021 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e" containerName="oc" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.653136 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac85b036-a204-4026-841a-e4d3f91841ef" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.653237 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac85b036-a204-4026-841a-e4d3f91841ef" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.653328 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad93dbf2-c574-4701-8d8c-14f597593ea1" containerName="keystone-db-sync" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.653398 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad93dbf2-c574-4701-8d8c-14f597593ea1" containerName="keystone-db-sync" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 
03:54:13.653472 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbf943a-6678-4854-8bba-6d319f22b039" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.653541 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbf943a-6678-4854-8bba-6d319f22b039" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.653613 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26804f93-99d0-4652-bdc9-10c3283cbd57" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.653684 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="26804f93-99d0-4652-bdc9-10c3283cbd57" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.653775 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941df155-1f41-44a9-bdf8-80a7e7864a2e" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.653847 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="941df155-1f41-44a9-bdf8-80a7e7864a2e" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.653941 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6787397-0210-4076-90cd-039c9dae6dcb" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.654008 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6787397-0210-4076-90cd-039c9dae6dcb" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: E0228 03:54:13.654097 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdda2ff-5aea-4055-9c24-10ae333a0681" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.654184 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdda2ff-5aea-4055-9c24-10ae333a0681" 
containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.654554 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad93dbf2-c574-4701-8d8c-14f597593ea1" containerName="keystone-db-sync" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.654646 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="941df155-1f41-44a9-bdf8-80a7e7864a2e" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.654738 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac85b036-a204-4026-841a-e4d3f91841ef" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.655480 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbf943a-6678-4854-8bba-6d319f22b039" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.655588 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e" containerName="oc" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.655669 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6787397-0210-4076-90cd-039c9dae6dcb" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.655754 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdda2ff-5aea-4055-9c24-10ae333a0681" containerName="mariadb-account-create-update" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.655831 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="26804f93-99d0-4652-bdc9-10c3283cbd57" containerName="mariadb-database-create" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.659914 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.663831 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-lw8ph"] Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.665179 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.668330 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.668482 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.668576 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.668726 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bkrfd" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.673651 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.705449 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d5h4h"] Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.724980 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-lw8ph"] Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794535 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: 
I0228 03:54:13.794588 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-fernet-keys\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794624 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-credential-keys\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794645 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdkg6\" (UniqueName: \"kubernetes.io/projected/397266b0-e2fd-4735-b799-d9c15c78a2cc-kube-api-access-zdkg6\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794664 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794693 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-scripts\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 
03:54:13.794733 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-config-data\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794763 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-combined-ca-bundle\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794787 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794808 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.794863 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28vp\" (UniqueName: \"kubernetes.io/projected/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-kube-api-access-w28vp\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" 
Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.795003 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-config\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.897872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-fernet-keys\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.898871 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-credential-keys\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.898959 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdkg6\" (UniqueName: \"kubernetes.io/projected/397266b0-e2fd-4735-b799-d9c15c78a2cc-kube-api-access-zdkg6\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.899028 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.899163 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-scripts\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.899271 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-config-data\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.899362 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-combined-ca-bundle\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.910733 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.910833 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.910926 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w28vp\" (UniqueName: \"kubernetes.io/projected/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-kube-api-access-w28vp\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.911095 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-config\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.911211 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.911549 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.900281 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.908114 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-credential-keys\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.910949 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-fernet-keys\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.912369 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.912936 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.913187 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-combined-ca-bundle\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.921504 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-scripts\") pod \"keystone-bootstrap-d5h4h\" (UID: 
\"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.927012 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-config-data\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.927857 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-config\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.951160 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdkg6\" (UniqueName: \"kubernetes.io/projected/397266b0-e2fd-4735-b799-d9c15c78a2cc-kube-api-access-zdkg6\") pod \"keystone-bootstrap-d5h4h\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.958745 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28vp\" (UniqueName: \"kubernetes.io/projected/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-kube-api-access-w28vp\") pod \"dnsmasq-dns-6f8c45789f-lw8ph\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.995316 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c5cbd76fc-29hwp"] Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.996871 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:13 crc kubenswrapper[4624]: I0228 03:54:13.997793 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:13.999005 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.001100 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-r9b5f" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.012671 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.012960 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.017196 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.043030 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-scripts\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.060259 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-horizon-secret-key\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.060827 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7sw\" (UniqueName: \"kubernetes.io/projected/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-kube-api-access-zt7sw\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.061580 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-config-data\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.061716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-logs\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.092345 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5cbd76fc-29hwp"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.230449 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-scripts\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.230910 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-horizon-secret-key\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " 
pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.231020 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7sw\" (UniqueName: \"kubernetes.io/projected/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-kube-api-access-zt7sw\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.231197 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-config-data\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.231294 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-logs\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.234019 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-scripts\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.243205 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-config-data\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.253944 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-logs\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.256702 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-horizon-secret-key\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.288711 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt7sw\" (UniqueName: \"kubernetes.io/projected/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-kube-api-access-zt7sw\") pod \"horizon-6c5cbd76fc-29hwp\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.304466 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vrwnn"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.347610 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.365812 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.366163 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8tlzl"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.369922 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.370544 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9xwxm" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.383596 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.418149 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7zvf6"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.419536 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.450628 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-config\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.450724 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25b5\" (UniqueName: \"kubernetes.io/projected/fb1928c8-43f4-46a7-997e-baa034bb94d8-kube-api-access-d25b5\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.450759 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-combined-ca-bundle\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.451420 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.451682 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fn79l" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.451879 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.452161 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.452278 4624 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k6ls8" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.452400 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.459256 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vrwnn"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.485738 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.495381 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8tlzl"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.530349 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-lw8ph"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.550428 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.552717 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.556373 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtx6c\" (UniqueName: \"kubernetes.io/projected/0d169c37-cd26-4e66-8f96-d0a53a96d616-kube-api-access-rtx6c\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.564991 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc81234-8d71-4da8-821f-62f79823de92-logs\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.565552 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-config-data\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.565648 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-combined-ca-bundle\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.565965 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5vg\" (UniqueName: \"kubernetes.io/projected/cdc81234-8d71-4da8-821f-62f79823de92-kube-api-access-8f5vg\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") 
" pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.566548 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-scripts\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.566662 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-db-sync-config-data\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.567119 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-config\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.567274 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d169c37-cd26-4e66-8f96-d0a53a96d616-etc-machine-id\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.567385 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-combined-ca-bundle\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc 
kubenswrapper[4624]: I0228 03:54:14.571861 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25b5\" (UniqueName: \"kubernetes.io/projected/fb1928c8-43f4-46a7-997e-baa034bb94d8-kube-api-access-d25b5\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.572623 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-combined-ca-bundle\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.572822 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-config-data\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.573010 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-scripts\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.588351 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.589393 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-config\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " 
pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.591150 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7zvf6"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.599031 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.607323 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-combined-ca-bundle\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.650531 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.671276 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bs8v6"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.673411 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680429 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d169c37-cd26-4e66-8f96-d0a53a96d616-etc-machine-id\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680476 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-combined-ca-bundle\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680517 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-scripts\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680559 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-config-data\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680582 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-run-httpd\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680596 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-config-data\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680636 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-scripts\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680675 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtx6c\" (UniqueName: \"kubernetes.io/projected/0d169c37-cd26-4e66-8f96-d0a53a96d616-kube-api-access-rtx6c\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680697 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc81234-8d71-4da8-821f-62f79823de92-logs\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680720 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-config-data\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhqm\" (UniqueName: 
\"kubernetes.io/projected/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-kube-api-access-2fhqm\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680755 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-combined-ca-bundle\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680783 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5vg\" (UniqueName: \"kubernetes.io/projected/cdc81234-8d71-4da8-821f-62f79823de92-kube-api-access-8f5vg\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680802 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-scripts\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680826 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-db-sync-config-data\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680854 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680871 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.680909 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-log-httpd\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.681011 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d169c37-cd26-4e66-8f96-d0a53a96d616-etc-machine-id\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.685455 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25b5\" (UniqueName: \"kubernetes.io/projected/fb1928c8-43f4-46a7-997e-baa034bb94d8-kube-api-access-d25b5\") pod \"neutron-db-sync-vrwnn\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.697723 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-scripts\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 
03:54:14.716275 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc81234-8d71-4da8-821f-62f79823de92-logs\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.718012 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-scripts\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.719879 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-combined-ca-bundle\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.724831 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-combined-ca-bundle\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.725384 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.747562 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-config-data\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.748567 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-config-data\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.749733 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-db-sync-config-data\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.751190 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bs8v6"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.775176 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78cb4b6465-msstz"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.777066 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.785826 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.786219 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fhqm\" (UniqueName: \"kubernetes.io/projected/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-kube-api-access-2fhqm\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.786400 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.786557 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.786673 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 
crc kubenswrapper[4624]: I0228 03:54:14.786807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-log-httpd\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.786914 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-config\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.790537 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-scripts\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.790811 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-run-httpd\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.790930 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-config-data\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.791040 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.809013 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltl48\" (UniqueName: \"kubernetes.io/projected/df7cc5ba-b521-4349-8306-35d633072cef-kube-api-access-ltl48\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.793657 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-run-httpd\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.807800 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.808931 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78cb4b6465-msstz"] Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.794594 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-log-httpd\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.812156 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.816794 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-scripts\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.817706 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-config-data\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.835099 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5vg\" (UniqueName: \"kubernetes.io/projected/cdc81234-8d71-4da8-821f-62f79823de92-kube-api-access-8f5vg\") pod \"placement-db-sync-7zvf6\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.835783 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fhqm\" (UniqueName: \"kubernetes.io/projected/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-kube-api-access-2fhqm\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.840996 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtx6c\" (UniqueName: \"kubernetes.io/projected/0d169c37-cd26-4e66-8f96-d0a53a96d616-kube-api-access-rtx6c\") pod \"cinder-db-sync-8tlzl\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " 
pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.857263 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.922189 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974322 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974407 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-config-data\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974579 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-config\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974606 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htrv\" (UniqueName: 
\"kubernetes.io/projected/69bc742a-a80d-43f4-90cc-993a14f7dbd5-kube-api-access-5htrv\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974771 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974806 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltl48\" (UniqueName: \"kubernetes.io/projected/df7cc5ba-b521-4349-8306-35d633072cef-kube-api-access-ltl48\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974842 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc742a-a80d-43f4-90cc-993a14f7dbd5-logs\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974874 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/69bc742a-a80d-43f4-90cc-993a14f7dbd5-horizon-secret-key\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974907 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-scripts\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.974957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.975011 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.976097 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.979664 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.980389 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-config\") pod 
\"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.981418 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:14 crc kubenswrapper[4624]: I0228 03:54:14.999412 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l9l75"] Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.000733 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.006224 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-29fpl" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.012520 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.013521 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.077597 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltl48\" (UniqueName: \"kubernetes.io/projected/df7cc5ba-b521-4349-8306-35d633072cef-kube-api-access-ltl48\") pod \"dnsmasq-dns-fcfdd6f9f-bs8v6\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 
03:54:15.090997 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-config-data\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.091176 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htrv\" (UniqueName: \"kubernetes.io/projected/69bc742a-a80d-43f4-90cc-993a14f7dbd5-kube-api-access-5htrv\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.091256 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc742a-a80d-43f4-90cc-993a14f7dbd5-logs\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.091286 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/69bc742a-a80d-43f4-90cc-993a14f7dbd5-horizon-secret-key\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.091313 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-scripts\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.092394 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-config-data\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.095578 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-scripts\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.095852 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc742a-a80d-43f4-90cc-993a14f7dbd5-logs\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.103427 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/69bc742a-a80d-43f4-90cc-993a14f7dbd5-horizon-secret-key\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.114760 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.117003 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l9l75"] Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.130586 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7zvf6" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.146029 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htrv\" (UniqueName: \"kubernetes.io/projected/69bc742a-a80d-43f4-90cc-993a14f7dbd5-kube-api-access-5htrv\") pod \"horizon-78cb4b6465-msstz\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.195382 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-combined-ca-bundle\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.195581 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lkzm\" (UniqueName: \"kubernetes.io/projected/34b0ae2c-3cbb-419b-8214-739eea04c9a4-kube-api-access-4lkzm\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.195609 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-db-sync-config-data\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.348814 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lkzm\" (UniqueName: \"kubernetes.io/projected/34b0ae2c-3cbb-419b-8214-739eea04c9a4-kube-api-access-4lkzm\") pod \"barbican-db-sync-l9l75\" (UID: 
\"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.349102 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-db-sync-config-data\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.349260 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-combined-ca-bundle\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.361060 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.409062 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lkzm\" (UniqueName: \"kubernetes.io/projected/34b0ae2c-3cbb-419b-8214-739eea04c9a4-kube-api-access-4lkzm\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.409487 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-combined-ca-bundle\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.426582 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.426844 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-db-sync-config-data\") pod \"barbican-db-sync-l9l75\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.486970 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-lw8ph"] Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.583816 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l9l75" Feb 28 03:54:15 crc kubenswrapper[4624]: I0228 03:54:15.901570 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d5h4h"] Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.423584 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5cbd76fc-29hwp"] Feb 28 03:54:16 crc kubenswrapper[4624]: W0228 03:54:16.435963 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4468dd90_bd07_4cdf_8fc8_de0dcfff1c4b.slice/crio-aec2d609b63609b4f139d51ecba81744fdb962692cfd2ee8f87912753535e322 WatchSource:0}: Error finding container aec2d609b63609b4f139d51ecba81744fdb962692cfd2ee8f87912753535e322: Status 404 returned error can't find the container with id aec2d609b63609b4f139d51ecba81744fdb962692cfd2ee8f87912753535e322 Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.466069 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vrwnn"] Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.491177 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bs8v6"] Feb 28 03:54:16 crc 
kubenswrapper[4624]: I0228 03:54:16.537226 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5h4h" event={"ID":"397266b0-e2fd-4735-b799-d9c15c78a2cc","Type":"ContainerStarted","Data":"2eddc4df89acc270d9d8a7160b4029944058c3e59c39dbb6367ad09756366451"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.537672 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5h4h" event={"ID":"397266b0-e2fd-4735-b799-d9c15c78a2cc","Type":"ContainerStarted","Data":"baf721eb94f351a1752d1831c00886066d7800bb18689c82dd67a51f11031cc5"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.555948 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrwnn" event={"ID":"fb1928c8-43f4-46a7-997e-baa034bb94d8","Type":"ContainerStarted","Data":"5472dbb9612b80412c76a09fc5381efc6548ad0f1303036c9424c02c47a5ebcf"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.592685 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78cb4b6465-msstz"] Feb 28 03:54:16 crc kubenswrapper[4624]: W0228 03:54:16.593405 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdc81234_8d71_4da8_821f_62f79823de92.slice/crio-8f97efadfc4d61c35deb399bedc5abfabba8c9dd826a62159ac090b623306789 WatchSource:0}: Error finding container 8f97efadfc4d61c35deb399bedc5abfabba8c9dd826a62159ac090b623306789: Status 404 returned error can't find the container with id 8f97efadfc4d61c35deb399bedc5abfabba8c9dd826a62159ac090b623306789 Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.601306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5cbd76fc-29hwp" event={"ID":"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b","Type":"ContainerStarted","Data":"aec2d609b63609b4f139d51ecba81744fdb962692cfd2ee8f87912753535e322"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.615469 4624 
generic.go:334] "Generic (PLEG): container finished" podID="9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" containerID="c87607d8d5a545a1ec9417a8e4b1fe494ba7c05c7098ab23f601427f62da7314" exitCode=0 Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.615544 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" event={"ID":"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4","Type":"ContainerDied","Data":"c87607d8d5a545a1ec9417a8e4b1fe494ba7c05c7098ab23f601427f62da7314"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.615584 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" event={"ID":"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4","Type":"ContainerStarted","Data":"0e96c5cf95c0b0938092f9345932e3f99e60dbe0c0b10fa19b4df2ec8441a15c"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.622738 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7zvf6"] Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.644412 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8tlzl"] Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.649796 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" event={"ID":"df7cc5ba-b521-4349-8306-35d633072cef","Type":"ContainerStarted","Data":"1bf6813312b05e1c3b09708625f9dd9435374cd8dd38ec2b1f7f96cc67660596"} Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.683734 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d5h4h" podStartSLOduration=3.683701062 podStartE2EDuration="3.683701062s" podCreationTimestamp="2026-02-28 03:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:16.565372222 +0000 UTC m=+1111.229411531" watchObservedRunningTime="2026-02-28 03:54:16.683701062 +0000 UTC 
m=+1111.347740371" Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.701471 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l9l75"] Feb 28 03:54:16 crc kubenswrapper[4624]: I0228 03:54:16.723137 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.191586 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.313611 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-sb\") pod \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.313782 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w28vp\" (UniqueName: \"kubernetes.io/projected/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-kube-api-access-w28vp\") pod \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.314141 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-config\") pod \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.314286 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-swift-storage-0\") pod \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 
03:54:17.314584 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-nb\") pod \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.314748 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-svc\") pod \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\" (UID: \"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4\") " Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.400752 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-kube-api-access-w28vp" (OuterVolumeSpecName: "kube-api-access-w28vp") pod "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" (UID: "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4"). InnerVolumeSpecName "kube-api-access-w28vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.430192 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w28vp\" (UniqueName: \"kubernetes.io/projected/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-kube-api-access-w28vp\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.528449 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" (UID: "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.535341 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.539273 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-config" (OuterVolumeSpecName: "config") pod "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" (UID: "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.546306 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" (UID: "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.550739 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" (UID: "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.558471 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5cbd76fc-29hwp"] Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.585892 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" (UID: "9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.588056 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6697794ff-9xhd2"] Feb 28 03:54:17 crc kubenswrapper[4624]: E0228 03:54:17.588792 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" containerName="init" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.589006 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" containerName="init" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.589294 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" containerName="init" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.595294 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.603119 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6697794ff-9xhd2"] Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.609162 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.638028 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.638071 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.638166 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.638176 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.676375 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87","Type":"ContainerStarted","Data":"95d452a624edafd2bd122f41fd5aa727cd6e26c18fe907864c317e8c55a96864"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.690709 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" 
event={"ID":"9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4","Type":"ContainerDied","Data":"0e96c5cf95c0b0938092f9345932e3f99e60dbe0c0b10fa19b4df2ec8441a15c"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.690735 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-lw8ph" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.690866 4624 scope.go:117] "RemoveContainer" containerID="c87607d8d5a545a1ec9417a8e4b1fe494ba7c05c7098ab23f601427f62da7314" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.697553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9l75" event={"ID":"34b0ae2c-3cbb-419b-8214-739eea04c9a4","Type":"ContainerStarted","Data":"9cd6e91d9ee0a9ca2db41d7dc8daaafd7a07c95b54e085bcc5cecb773ca36562"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.714950 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb4b6465-msstz" event={"ID":"69bc742a-a80d-43f4-90cc-993a14f7dbd5","Type":"ContainerStarted","Data":"730e233069e061c1ca7dbb270746bbe2e9aecab01a326ca2e4236a7a75aace91"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.718743 4624 generic.go:334] "Generic (PLEG): container finished" podID="df7cc5ba-b521-4349-8306-35d633072cef" containerID="6664ee977520eea8a8bc1362c2e38e1d2af422afada72900b1ed649773cdbc9f" exitCode=0 Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.718794 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" event={"ID":"df7cc5ba-b521-4349-8306-35d633072cef","Type":"ContainerDied","Data":"6664ee977520eea8a8bc1362c2e38e1d2af422afada72900b1ed649773cdbc9f"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.721464 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrwnn" 
event={"ID":"fb1928c8-43f4-46a7-997e-baa034bb94d8","Type":"ContainerStarted","Data":"44dd7334941203acec54576509f8450f7c271f99fc620caa6798bd650cbdecac"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.724409 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7zvf6" event={"ID":"cdc81234-8d71-4da8-821f-62f79823de92","Type":"ContainerStarted","Data":"8f97efadfc4d61c35deb399bedc5abfabba8c9dd826a62159ac090b623306789"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.726895 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8tlzl" event={"ID":"0d169c37-cd26-4e66-8f96-d0a53a96d616","Type":"ContainerStarted","Data":"ee223a4da91292046167a5001d561429aa9460697391d4a371893af1cc26b2aa"} Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.741036 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d192312c-1396-4c6c-a687-b4ddfe356ded-horizon-secret-key\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.741201 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192312c-1396-4c6c-a687-b4ddfe356ded-logs\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.741240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-config-data\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 
03:54:17.741304 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-scripts\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.742582 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqx5s\" (UniqueName: \"kubernetes.io/projected/d192312c-1396-4c6c-a687-b4ddfe356ded-kube-api-access-tqx5s\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.811369 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-lw8ph"] Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.842622 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-lw8ph"] Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.846899 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-config-data\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.848313 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-scripts\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.848540 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqx5s\" (UniqueName: 
\"kubernetes.io/projected/d192312c-1396-4c6c-a687-b4ddfe356ded-kube-api-access-tqx5s\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.848768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d192312c-1396-4c6c-a687-b4ddfe356ded-horizon-secret-key\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.849106 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192312c-1396-4c6c-a687-b4ddfe356ded-logs\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.851471 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-scripts\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.854301 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-config-data\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.855687 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192312c-1396-4c6c-a687-b4ddfe356ded-logs\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " 
pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.867015 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d192312c-1396-4c6c-a687-b4ddfe356ded-horizon-secret-key\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.875403 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vrwnn" podStartSLOduration=3.875365613 podStartE2EDuration="3.875365613s" podCreationTimestamp="2026-02-28 03:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:17.799862895 +0000 UTC m=+1112.463902204" watchObservedRunningTime="2026-02-28 03:54:17.875365613 +0000 UTC m=+1112.539404922" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.892215 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqx5s\" (UniqueName: \"kubernetes.io/projected/d192312c-1396-4c6c-a687-b4ddfe356ded-kube-api-access-tqx5s\") pod \"horizon-6697794ff-9xhd2\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:17 crc kubenswrapper[4624]: I0228 03:54:17.929276 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:54:18 crc kubenswrapper[4624]: I0228 03:54:18.112890 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4" path="/var/lib/kubelet/pods/9fc4d49f-d62f-494a-b8c0-6fde8cecd2b4/volumes" Feb 28 03:54:18 crc kubenswrapper[4624]: I0228 03:54:18.683527 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6697794ff-9xhd2"] Feb 28 03:54:18 crc kubenswrapper[4624]: I0228 03:54:18.811006 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" event={"ID":"df7cc5ba-b521-4349-8306-35d633072cef","Type":"ContainerStarted","Data":"80deb2970ff82709b20527153c827237964e035a24da3c743cac2918bc221d2d"} Feb 28 03:54:18 crc kubenswrapper[4624]: I0228 03:54:18.811444 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:18 crc kubenswrapper[4624]: I0228 03:54:18.835456 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" podStartSLOduration=4.835426303 podStartE2EDuration="4.835426303s" podCreationTimestamp="2026-02-28 03:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:18.833641394 +0000 UTC m=+1113.497680703" watchObservedRunningTime="2026-02-28 03:54:18.835426303 +0000 UTC m=+1113.499465612" Feb 28 03:54:19 crc kubenswrapper[4624]: I0228 03:54:19.813633 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6697794ff-9xhd2" event={"ID":"d192312c-1396-4c6c-a687-b4ddfe356ded","Type":"ContainerStarted","Data":"950ef5aff74ba54a72f723f5177952ecd245b60824ee05b8fd0b7f3e9f0a3324"} Feb 28 03:54:21 crc kubenswrapper[4624]: I0228 03:54:21.846953 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="e987d56b-dcae-4f73-8e96-9010674f3c4e" containerID="f6ce4a2c70a6813aff13e07dc29f496b6b87405812f7cf36e2d1098ff78481c3" exitCode=0 Feb 28 03:54:21 crc kubenswrapper[4624]: I0228 03:54:21.847221 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmxqw" event={"ID":"e987d56b-dcae-4f73-8e96-9010674f3c4e","Type":"ContainerDied","Data":"f6ce4a2c70a6813aff13e07dc29f496b6b87405812f7cf36e2d1098ff78481c3"} Feb 28 03:54:22 crc kubenswrapper[4624]: I0228 03:54:22.869535 4624 generic.go:334] "Generic (PLEG): container finished" podID="397266b0-e2fd-4735-b799-d9c15c78a2cc" containerID="2eddc4df89acc270d9d8a7160b4029944058c3e59c39dbb6367ad09756366451" exitCode=0 Feb 28 03:54:22 crc kubenswrapper[4624]: I0228 03:54:22.869612 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5h4h" event={"ID":"397266b0-e2fd-4735-b799-d9c15c78a2cc","Type":"ContainerDied","Data":"2eddc4df89acc270d9d8a7160b4029944058c3e59c39dbb6367ad09756366451"} Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.746076 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78cb4b6465-msstz"] Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.785935 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b4bc59cd8-fkd4p"] Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.787577 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.795943 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.838361 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4bc59cd8-fkd4p"] Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.892855 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6697794ff-9xhd2"] Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920728 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-combined-ca-bundle\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920824 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdpxz\" (UniqueName: \"kubernetes.io/projected/ca1103dd-2624-40c7-9cc4-cf55c51633a2-kube-api-access-zdpxz\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920864 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-secret-key\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920902 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-config-data\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920933 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-scripts\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920956 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-tls-certs\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.920976 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca1103dd-2624-40c7-9cc4-cf55c51633a2-logs\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.947965 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cc988c5cd-svksm"] Feb 28 03:54:23 crc kubenswrapper[4624]: I0228 03:54:23.949982 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026418 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-scripts\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026520 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ccc2a9a-c3cc-4ddb-a700-86713957337e-scripts\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026561 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-tls-certs\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026586 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca1103dd-2624-40c7-9cc4-cf55c51633a2-logs\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026645 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcxt7\" (UniqueName: \"kubernetes.io/projected/6ccc2a9a-c3cc-4ddb-a700-86713957337e-kube-api-access-wcxt7\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc 
kubenswrapper[4624]: I0228 03:54:24.026678 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ccc2a9a-c3cc-4ddb-a700-86713957337e-config-data\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026725 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-combined-ca-bundle\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026921 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdpxz\" (UniqueName: \"kubernetes.io/projected/ca1103dd-2624-40c7-9cc4-cf55c51633a2-kube-api-access-zdpxz\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.026962 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-combined-ca-bundle\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.027003 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-horizon-tls-certs\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc 
kubenswrapper[4624]: I0228 03:54:24.027041 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-secret-key\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.027096 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-horizon-secret-key\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.027148 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc2a9a-c3cc-4ddb-a700-86713957337e-logs\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.027195 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-config-data\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.038739 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-scripts\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.039025 4624 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca1103dd-2624-40c7-9cc4-cf55c51633a2-logs\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.053688 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-config-data\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.054510 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-combined-ca-bundle\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.062355 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-tls-certs\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.068997 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-secret-key\") pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.082053 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdpxz\" (UniqueName: \"kubernetes.io/projected/ca1103dd-2624-40c7-9cc4-cf55c51633a2-kube-api-access-zdpxz\") 
pod \"horizon-5b4bc59cd8-fkd4p\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.111737 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc988c5cd-svksm"] Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.129773 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcxt7\" (UniqueName: \"kubernetes.io/projected/6ccc2a9a-c3cc-4ddb-a700-86713957337e-kube-api-access-wcxt7\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.129849 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ccc2a9a-c3cc-4ddb-a700-86713957337e-config-data\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.129913 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-combined-ca-bundle\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.129941 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-horizon-tls-certs\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.129967 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-horizon-secret-key\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.129998 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc2a9a-c3cc-4ddb-a700-86713957337e-logs\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.130036 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ccc2a9a-c3cc-4ddb-a700-86713957337e-scripts\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.141199 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ccc2a9a-c3cc-4ddb-a700-86713957337e-scripts\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.141597 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc2a9a-c3cc-4ddb-a700-86713957337e-logs\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.142601 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ccc2a9a-c3cc-4ddb-a700-86713957337e-config-data\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " 
pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.143156 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.147555 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-horizon-secret-key\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.147952 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-combined-ca-bundle\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.151007 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ccc2a9a-c3cc-4ddb-a700-86713957337e-horizon-tls-certs\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.192616 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcxt7\" (UniqueName: \"kubernetes.io/projected/6ccc2a9a-c3cc-4ddb-a700-86713957337e-kube-api-access-wcxt7\") pod \"horizon-6cc988c5cd-svksm\" (UID: \"6ccc2a9a-c3cc-4ddb-a700-86713957337e\") " pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:24 crc kubenswrapper[4624]: I0228 03:54:24.320792 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:54:25 crc kubenswrapper[4624]: I0228 03:54:25.363162 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:54:25 crc kubenswrapper[4624]: I0228 03:54:25.433787 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4fhlg"] Feb 28 03:54:25 crc kubenswrapper[4624]: I0228 03:54:25.434047 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" containerID="cri-o://772af69066771ce1f1d37e96334fecb39c7f80fd096300cb0b8cceef5c5486b2" gracePeriod=10 Feb 28 03:54:25 crc kubenswrapper[4624]: I0228 03:54:25.910406 4624 generic.go:334] "Generic (PLEG): container finished" podID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerID="772af69066771ce1f1d37e96334fecb39c7f80fd096300cb0b8cceef5c5486b2" exitCode=0 Feb 28 03:54:25 crc kubenswrapper[4624]: I0228 03:54:25.910457 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" event={"ID":"fa0cd34b-c733-4703-9728-db6d9c69f888","Type":"ContainerDied","Data":"772af69066771ce1f1d37e96334fecb39c7f80fd096300cb0b8cceef5c5486b2"} Feb 28 03:54:28 crc kubenswrapper[4624]: I0228 03:54:28.536148 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 28 03:54:32 crc kubenswrapper[4624]: I0228 03:54:32.929829 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jmxqw" Feb 28 03:54:32 crc kubenswrapper[4624]: I0228 03:54:32.947178 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:32 crc kubenswrapper[4624]: I0228 03:54:32.986546 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-db-sync-config-data\") pod \"e987d56b-dcae-4f73-8e96-9010674f3c4e\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " Feb 28 03:54:32 crc kubenswrapper[4624]: I0228 03:54:32.986601 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-combined-ca-bundle\") pod \"e987d56b-dcae-4f73-8e96-9010674f3c4e\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " Feb 28 03:54:32 crc kubenswrapper[4624]: I0228 03:54:32.986664 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-config-data\") pod \"e987d56b-dcae-4f73-8e96-9010674f3c4e\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " Feb 28 03:54:32 crc kubenswrapper[4624]: I0228 03:54:32.986718 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8blt7\" (UniqueName: \"kubernetes.io/projected/e987d56b-dcae-4f73-8e96-9010674f3c4e-kube-api-access-8blt7\") pod \"e987d56b-dcae-4f73-8e96-9010674f3c4e\" (UID: \"e987d56b-dcae-4f73-8e96-9010674f3c4e\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.018266 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e987d56b-dcae-4f73-8e96-9010674f3c4e-kube-api-access-8blt7" (OuterVolumeSpecName: "kube-api-access-8blt7") pod "e987d56b-dcae-4f73-8e96-9010674f3c4e" (UID: "e987d56b-dcae-4f73-8e96-9010674f3c4e"). InnerVolumeSpecName "kube-api-access-8blt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.026665 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e987d56b-dcae-4f73-8e96-9010674f3c4e" (UID: "e987d56b-dcae-4f73-8e96-9010674f3c4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.035715 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e987d56b-dcae-4f73-8e96-9010674f3c4e" (UID: "e987d56b-dcae-4f73-8e96-9010674f3c4e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.037151 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d5h4h" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.037252 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5h4h" event={"ID":"397266b0-e2fd-4735-b799-d9c15c78a2cc","Type":"ContainerDied","Data":"baf721eb94f351a1752d1831c00886066d7800bb18689c82dd67a51f11031cc5"} Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.037449 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf721eb94f351a1752d1831c00886066d7800bb18689c82dd67a51f11031cc5" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.040747 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmxqw" event={"ID":"e987d56b-dcae-4f73-8e96-9010674f3c4e","Type":"ContainerDied","Data":"0b340ef2159026b325e51f2100b4d95eeea0643ca5aaef65af9cbf48615df78d"} Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.040990 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jmxqw" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.041796 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b340ef2159026b325e51f2100b4d95eeea0643ca5aaef65af9cbf48615df78d" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.092159 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-credential-keys\") pod \"397266b0-e2fd-4735-b799-d9c15c78a2cc\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.093136 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-fernet-keys\") pod \"397266b0-e2fd-4735-b799-d9c15c78a2cc\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.093549 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-combined-ca-bundle\") pod \"397266b0-e2fd-4735-b799-d9c15c78a2cc\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.093580 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-config-data\") pod \"397266b0-e2fd-4735-b799-d9c15c78a2cc\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.093690 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdkg6\" (UniqueName: \"kubernetes.io/projected/397266b0-e2fd-4735-b799-d9c15c78a2cc-kube-api-access-zdkg6\") pod \"397266b0-e2fd-4735-b799-d9c15c78a2cc\" 
(UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.093763 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-scripts\") pod \"397266b0-e2fd-4735-b799-d9c15c78a2cc\" (UID: \"397266b0-e2fd-4735-b799-d9c15c78a2cc\") " Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.094450 4624 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.094467 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.094479 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8blt7\" (UniqueName: \"kubernetes.io/projected/e987d56b-dcae-4f73-8e96-9010674f3c4e-kube-api-access-8blt7\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.120040 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397266b0-e2fd-4735-b799-d9c15c78a2cc-kube-api-access-zdkg6" (OuterVolumeSpecName: "kube-api-access-zdkg6") pod "397266b0-e2fd-4735-b799-d9c15c78a2cc" (UID: "397266b0-e2fd-4735-b799-d9c15c78a2cc"). InnerVolumeSpecName "kube-api-access-zdkg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.121726 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "397266b0-e2fd-4735-b799-d9c15c78a2cc" (UID: "397266b0-e2fd-4735-b799-d9c15c78a2cc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.125851 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "397266b0-e2fd-4735-b799-d9c15c78a2cc" (UID: "397266b0-e2fd-4735-b799-d9c15c78a2cc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.125923 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-scripts" (OuterVolumeSpecName: "scripts") pod "397266b0-e2fd-4735-b799-d9c15c78a2cc" (UID: "397266b0-e2fd-4735-b799-d9c15c78a2cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.130092 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-config-data" (OuterVolumeSpecName: "config-data") pod "e987d56b-dcae-4f73-8e96-9010674f3c4e" (UID: "e987d56b-dcae-4f73-8e96-9010674f3c4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.159543 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-config-data" (OuterVolumeSpecName: "config-data") pod "397266b0-e2fd-4735-b799-d9c15c78a2cc" (UID: "397266b0-e2fd-4735-b799-d9c15c78a2cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.172378 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "397266b0-e2fd-4735-b799-d9c15c78a2cc" (UID: "397266b0-e2fd-4735-b799-d9c15c78a2cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198209 4624 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198254 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e987d56b-dcae-4f73-8e96-9010674f3c4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198264 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198273 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 
03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198284 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198294 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdkg6\" (UniqueName: \"kubernetes.io/projected/397266b0-e2fd-4735-b799-d9c15c78a2cc-kube-api-access-zdkg6\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:33 crc kubenswrapper[4624]: I0228 03:54:33.198303 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/397266b0-e2fd-4735-b799-d9c15c78a2cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.170043 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d5h4h"] Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.184437 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d5h4h"] Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.318864 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mkzf4"] Feb 28 03:54:34 crc kubenswrapper[4624]: E0228 03:54:34.319649 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e987d56b-dcae-4f73-8e96-9010674f3c4e" containerName="glance-db-sync" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.319678 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e987d56b-dcae-4f73-8e96-9010674f3c4e" containerName="glance-db-sync" Feb 28 03:54:34 crc kubenswrapper[4624]: E0228 03:54:34.319707 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397266b0-e2fd-4735-b799-d9c15c78a2cc" containerName="keystone-bootstrap" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.319716 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="397266b0-e2fd-4735-b799-d9c15c78a2cc" containerName="keystone-bootstrap" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.319982 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e987d56b-dcae-4f73-8e96-9010674f3c4e" containerName="glance-db-sync" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.320024 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="397266b0-e2fd-4735-b799-d9c15c78a2cc" containerName="keystone-bootstrap" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.321105 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.330168 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.330489 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.333788 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkzf4"] Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.334791 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.335221 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.335475 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bkrfd" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.455175 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjh9v\" (UniqueName: \"kubernetes.io/projected/12eae8a2-7f1a-447e-afbc-30bc3760f6df-kube-api-access-rjh9v\") pod \"keystone-bootstrap-mkzf4\" (UID: 
\"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.455221 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-scripts\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.455259 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-config-data\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.455357 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-credential-keys\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.455380 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-combined-ca-bundle\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.455551 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-fernet-keys\") pod \"keystone-bootstrap-mkzf4\" (UID: 
\"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.558702 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjh9v\" (UniqueName: \"kubernetes.io/projected/12eae8a2-7f1a-447e-afbc-30bc3760f6df-kube-api-access-rjh9v\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.558747 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-scripts\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.558772 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-config-data\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.558820 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-credential-keys\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.558843 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-combined-ca-bundle\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 
crc kubenswrapper[4624]: I0228 03:54:34.558914 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-fernet-keys\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.574290 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-credential-keys\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.577985 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-config-data\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.578939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-combined-ca-bundle\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.579138 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-scripts\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.605100 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-fernet-keys\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.627957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjh9v\" (UniqueName: \"kubernetes.io/projected/12eae8a2-7f1a-447e-afbc-30bc3760f6df-kube-api-access-rjh9v\") pod \"keystone-bootstrap-mkzf4\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.671673 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.683451 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-46bmz"] Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.685207 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.729506 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-46bmz"] Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.775920 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.776013 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.776033 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.776142 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.776182 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-config\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.776203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xc5\" (UniqueName: \"kubernetes.io/projected/e27f535b-910c-4a13-989f-de13019d4a7d-kube-api-access-58xc5\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.880389 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.880731 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.880756 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.881979 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.882025 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-config\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.882046 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xc5\" (UniqueName: \"kubernetes.io/projected/e27f535b-910c-4a13-989f-de13019d4a7d-kube-api-access-58xc5\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.881862 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.883027 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.883982 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-config\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.883982 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.884404 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:34 crc kubenswrapper[4624]: I0228 03:54:34.905803 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xc5\" (UniqueName: \"kubernetes.io/projected/e27f535b-910c-4a13-989f-de13019d4a7d-kube-api-access-58xc5\") pod \"dnsmasq-dns-57c957c4ff-46bmz\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.047748 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.404868 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.407240 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.410952 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.411855 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.412656 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ptb94" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.439120 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.498524 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.498654 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.498712 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2zk\" (UniqueName: \"kubernetes.io/projected/ed303757-67e3-41bc-a252-d37da81b5258-kube-api-access-jw2zk\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc 
kubenswrapper[4624]: I0228 03:54:35.498738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-logs\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.498760 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.498798 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.498909 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.600262 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-logs\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 
03:54:35.600850 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.600318 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.600917 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.600913 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-logs\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.601169 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.601288 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.601485 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.601593 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw2zk\" (UniqueName: \"kubernetes.io/projected/ed303757-67e3-41bc-a252-d37da81b5258-kube-api-access-jw2zk\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.602067 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.607960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.611787 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.624233 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw2zk\" (UniqueName: \"kubernetes.io/projected/ed303757-67e3-41bc-a252-d37da81b5258-kube-api-access-jw2zk\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.630169 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.631909 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " pod="openstack/glance-default-external-api-0" Feb 28 03:54:35 crc kubenswrapper[4624]: I0228 03:54:35.732697 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.082352 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.084486 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.088169 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.138296 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397266b0-e2fd-4735-b799-d9c15c78a2cc" path="/var/lib/kubelet/pods/397266b0-e2fd-4735-b799-d9c15c78a2cc/volumes" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.139449 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.214758 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.214817 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.215141 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.215247 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-logs\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.215389 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg9bs\" (UniqueName: \"kubernetes.io/projected/061053fe-7564-4fbc-8b09-93fea568b77e-kube-api-access-bg9bs\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.215469 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.215738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318622 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg9bs\" (UniqueName: \"kubernetes.io/projected/061053fe-7564-4fbc-8b09-93fea568b77e-kube-api-access-bg9bs\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318694 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318738 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318822 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318922 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.318951 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-logs\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.319446 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.319575 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-logs\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.320705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.325656 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.333182 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.334705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.352435 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg9bs\" (UniqueName: \"kubernetes.io/projected/061053fe-7564-4fbc-8b09-93fea568b77e-kube-api-access-bg9bs\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.356037 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:54:36 crc kubenswrapper[4624]: I0228 03:54:36.417155 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:54:37 crc kubenswrapper[4624]: I0228 03:54:37.350788 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:54:37 crc kubenswrapper[4624]: I0228 03:54:37.493346 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:54:38 crc kubenswrapper[4624]: I0228 03:54:38.535919 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Feb 28 03:54:43 crc kubenswrapper[4624]: I0228 03:54:43.536891 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Feb 28 03:54:43 crc kubenswrapper[4624]: I0228 03:54:43.537576 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:54:45 crc kubenswrapper[4624]: I0228 03:54:45.201409 4624 generic.go:334] "Generic (PLEG): container finished" podID="fb1928c8-43f4-46a7-997e-baa034bb94d8" containerID="44dd7334941203acec54576509f8450f7c271f99fc620caa6798bd650cbdecac" exitCode=0 Feb 28 03:54:45 crc kubenswrapper[4624]: I0228 03:54:45.201739 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrwnn" event={"ID":"fb1928c8-43f4-46a7-997e-baa034bb94d8","Type":"ContainerDied","Data":"44dd7334941203acec54576509f8450f7c271f99fc620caa6798bd650cbdecac"} Feb 28 03:54:48 crc kubenswrapper[4624]: I0228 03:54:48.540353 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Feb 28 03:54:49 crc kubenswrapper[4624]: I0228 03:54:49.540292 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:54:49 crc kubenswrapper[4624]: I0228 03:54:49.540782 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.273677 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" event={"ID":"fa0cd34b-c733-4703-9728-db6d9c69f888","Type":"ContainerDied","Data":"de9b33658f1aae69ca2ddfcef16c67eec53a38a0fc6a1b5aa7eb5f5af2ff6175"} Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.274250 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de9b33658f1aae69ca2ddfcef16c67eec53a38a0fc6a1b5aa7eb5f5af2ff6175" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.309207 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.400659 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-config\") pod \"fa0cd34b-c733-4703-9728-db6d9c69f888\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.401263 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-nb\") pod \"fa0cd34b-c733-4703-9728-db6d9c69f888\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.401339 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-svc\") pod \"fa0cd34b-c733-4703-9728-db6d9c69f888\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.401431 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88mh6\" (UniqueName: \"kubernetes.io/projected/fa0cd34b-c733-4703-9728-db6d9c69f888-kube-api-access-88mh6\") pod \"fa0cd34b-c733-4703-9728-db6d9c69f888\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.401491 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-sb\") pod \"fa0cd34b-c733-4703-9728-db6d9c69f888\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.401602 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-swift-storage-0\") pod \"fa0cd34b-c733-4703-9728-db6d9c69f888\" (UID: \"fa0cd34b-c733-4703-9728-db6d9c69f888\") " Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.420176 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0cd34b-c733-4703-9728-db6d9c69f888-kube-api-access-88mh6" (OuterVolumeSpecName: "kube-api-access-88mh6") pod "fa0cd34b-c733-4703-9728-db6d9c69f888" (UID: "fa0cd34b-c733-4703-9728-db6d9c69f888"). InnerVolumeSpecName "kube-api-access-88mh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.457883 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-config" (OuterVolumeSpecName: "config") pod "fa0cd34b-c733-4703-9728-db6d9c69f888" (UID: "fa0cd34b-c733-4703-9728-db6d9c69f888"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.471809 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa0cd34b-c733-4703-9728-db6d9c69f888" (UID: "fa0cd34b-c733-4703-9728-db6d9c69f888"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.476708 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa0cd34b-c733-4703-9728-db6d9c69f888" (UID: "fa0cd34b-c733-4703-9728-db6d9c69f888"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.482980 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa0cd34b-c733-4703-9728-db6d9c69f888" (UID: "fa0cd34b-c733-4703-9728-db6d9c69f888"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.488519 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa0cd34b-c733-4703-9728-db6d9c69f888" (UID: "fa0cd34b-c733-4703-9728-db6d9c69f888"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.504173 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88mh6\" (UniqueName: \"kubernetes.io/projected/fa0cd34b-c733-4703-9728-db6d9c69f888-kube-api-access-88mh6\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.504215 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.504225 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.504237 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.504246 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.504256 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0cd34b-c733-4703-9728-db6d9c69f888-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:52 crc kubenswrapper[4624]: E0228 03:54:52.700517 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 28 03:54:52 crc kubenswrapper[4624]: E0228 03:54:52.700737 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d5hf9h5b7h55bh7dh5dbh65h65fh5dbh694h677h657h56dhcbh5bdh5c5hfh64ch5cch568hcfh8fh5f4hd5h574h584h654h58bh5b5h65h54dh578q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fhqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2bf41691-ef23-4f33-83f0-ebd9c2ca1d87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:54:52 crc kubenswrapper[4624]: I0228 03:54:52.705226 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 03:54:53 crc kubenswrapper[4624]: E0228 03:54:53.275413 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 28 03:54:53 crc kubenswrapper[4624]: E0228 03:54:53.275691 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lkzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-l9l75_openstack(34b0ae2c-3cbb-419b-8214-739eea04c9a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:54:53 crc kubenswrapper[4624]: E0228 03:54:53.277452 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-l9l75" 
podUID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.286797 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.460581 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.473802 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4fhlg"] Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.487392 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-4fhlg"] Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.536792 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-combined-ca-bundle\") pod \"fb1928c8-43f4-46a7-997e-baa034bb94d8\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.536884 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d25b5\" (UniqueName: \"kubernetes.io/projected/fb1928c8-43f4-46a7-997e-baa034bb94d8-kube-api-access-d25b5\") pod \"fb1928c8-43f4-46a7-997e-baa034bb94d8\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.537177 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-config\") pod \"fb1928c8-43f4-46a7-997e-baa034bb94d8\" (UID: \"fb1928c8-43f4-46a7-997e-baa034bb94d8\") " Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.542109 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-4fhlg" 
podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.568398 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1928c8-43f4-46a7-997e-baa034bb94d8-kube-api-access-d25b5" (OuterVolumeSpecName: "kube-api-access-d25b5") pod "fb1928c8-43f4-46a7-997e-baa034bb94d8" (UID: "fb1928c8-43f4-46a7-997e-baa034bb94d8"). InnerVolumeSpecName "kube-api-access-d25b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.600772 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-config" (OuterVolumeSpecName: "config") pod "fb1928c8-43f4-46a7-997e-baa034bb94d8" (UID: "fb1928c8-43f4-46a7-997e-baa034bb94d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.613734 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb1928c8-43f4-46a7-997e-baa034bb94d8" (UID: "fb1928c8-43f4-46a7-997e-baa034bb94d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.639729 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.639764 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1928c8-43f4-46a7-997e-baa034bb94d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.639776 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d25b5\" (UniqueName: \"kubernetes.io/projected/fb1928c8-43f4-46a7-997e-baa034bb94d8-kube-api-access-d25b5\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:53 crc kubenswrapper[4624]: I0228 03:54:53.840392 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4bc59cd8-fkd4p"] Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.102914 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" path="/var/lib/kubelet/pods/fa0cd34b-c733-4703-9728-db6d9c69f888/volumes" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.303458 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrwnn" event={"ID":"fb1928c8-43f4-46a7-997e-baa034bb94d8","Type":"ContainerDied","Data":"5472dbb9612b80412c76a09fc5381efc6548ad0f1303036c9424c02c47a5ebcf"} Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.303507 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vrwnn" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.303533 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5472dbb9612b80412c76a09fc5381efc6548ad0f1303036c9424c02c47a5ebcf" Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.309402 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-l9l75" podUID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.558840 4624 scope.go:117] "RemoveContainer" containerID="b04e905b52161116f68e479dcebabfd6d116ea9718a06e87fdaf96e6907b6c63" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.755020 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-46bmz"] Feb 28 03:54:54 crc kubenswrapper[4624]: W0228 03:54:54.840103 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1103dd_2624_40c7_9cc4_cf55c51633a2.slice/crio-4550eb09f88ea82cec387b4a41cbfcf07033a4848ee90e0a29e68c9324eb5cf9 WatchSource:0}: Error finding container 4550eb09f88ea82cec387b4a41cbfcf07033a4848ee90e0a29e68c9324eb5cf9: Status 404 returned error can't find the container with id 4550eb09f88ea82cec387b4a41cbfcf07033a4848ee90e0a29e68c9324eb5cf9 Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.905255 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-w7g2j"] Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.905743 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="init" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.905762 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="init" Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.905774 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.905780 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.905789 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1928c8-43f4-46a7-997e-baa034bb94d8" containerName="neutron-db-sync" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.905797 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1928c8-43f4-46a7-997e-baa034bb94d8" containerName="neutron-db-sync" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.905989 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1928c8-43f4-46a7-997e-baa034bb94d8" containerName="neutron-db-sync" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.906011 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0cd34b-c733-4703-9728-db6d9c69f888" containerName="dnsmasq-dns" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.907189 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.943039 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-w7g2j"] Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.970523 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.970709 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropa
gation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtx6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8tlzl_openstack(0d169c37-cd26-4e66-8f96-d0a53a96d616): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:54:54 crc kubenswrapper[4624]: E0228 03:54:54.975174 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8tlzl" podUID="0d169c37-cd26-4e66-8f96-d0a53a96d616" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.988047 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:54 crc 
kubenswrapper[4624]: I0228 03:54:54.988138 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.988178 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-config\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.988203 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.988243 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:54 crc kubenswrapper[4624]: I0228 03:54:54.988305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9nt4\" (UniqueName: \"kubernetes.io/projected/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-kube-api-access-h9nt4\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " 
pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.070677 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b75fb948d-dzc9p"] Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.117527 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9nt4\" (UniqueName: \"kubernetes.io/projected/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-kube-api-access-h9nt4\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.143029 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.143331 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.143443 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-config\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.143495 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.143619 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.145914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.146488 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.147408 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.175627 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-config\") pod 
\"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.180132 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.189582 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9nt4\" (UniqueName: \"kubernetes.io/projected/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-kube-api-access-h9nt4\") pod \"dnsmasq-dns-5ccc5c4795-w7g2j\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.265236 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b75fb948d-dzc9p"] Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.273021 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.274765 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.280307 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9xwxm" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.280412 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.287759 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.307039 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.378116 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-httpd-config\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.378578 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-ovndb-tls-certs\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.378638 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-combined-ca-bundle\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.379630 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5thz\" (UniqueName: \"kubernetes.io/projected/e5f3e502-df83-4a0e-8240-53d8d6d78a80-kube-api-access-p5thz\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.379739 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-config\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.380163 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerStarted","Data":"4550eb09f88ea82cec387b4a41cbfcf07033a4848ee90e0a29e68c9324eb5cf9"} Feb 28 03:54:55 crc kubenswrapper[4624]: E0228 03:54:55.388755 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8tlzl" podUID="0d169c37-cd26-4e66-8f96-d0a53a96d616" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.483235 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-httpd-config\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.483325 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-ovndb-tls-certs\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.483357 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-combined-ca-bundle\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.483374 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5thz\" (UniqueName: \"kubernetes.io/projected/e5f3e502-df83-4a0e-8240-53d8d6d78a80-kube-api-access-p5thz\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.483421 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-config\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.494983 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-ovndb-tls-certs\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.502744 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-combined-ca-bundle\") pod 
\"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.518606 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-config\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.520743 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-httpd-config\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.525127 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5thz\" (UniqueName: \"kubernetes.io/projected/e5f3e502-df83-4a0e-8240-53d8d6d78a80-kube-api-access-p5thz\") pod \"neutron-5b75fb948d-dzc9p\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.631616 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.708002 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc988c5cd-svksm"] Feb 28 03:54:55 crc kubenswrapper[4624]: I0228 03:54:55.839322 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.047806 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkzf4"] Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.167103 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.213666 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.392811 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkzf4" event={"ID":"12eae8a2-7f1a-447e-afbc-30bc3760f6df","Type":"ContainerStarted","Data":"e046d7f13184d20e0bde2769363dfc7aaff8a8b882b1f53d10e89791430ac961"} Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.395365 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerStarted","Data":"4f2bffe7d94088f677f0d9a9bcf1c956cb5f899404b546e0cc9107b0a95e54bd"} Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.396744 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"061053fe-7564-4fbc-8b09-93fea568b77e","Type":"ContainerStarted","Data":"dd699482354330ef106e3d8e8f78de5abfbce6419e3e948a011ced04336eb6a2"} Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.398655 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"ed303757-67e3-41bc-a252-d37da81b5258","Type":"ContainerStarted","Data":"4e607a3b7069bc86d7aa8b9c5ce5f160bad4f3c69f9c2205d0dcff29d28fadb8"} Feb 28 03:54:56 crc kubenswrapper[4624]: I0228 03:54:56.621651 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-46bmz"] Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.320390 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b75fb948d-dzc9p"] Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.342493 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-w7g2j"] Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.506680 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7954bdb9b9-dwqsd"] Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.508552 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.520506 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.520643 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.531541 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7954bdb9b9-dwqsd"] Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.544117 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5cbd76fc-29hwp" event={"ID":"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b","Type":"ContainerStarted","Data":"87f6910ac28a0d1e4d386ad296e664e46c67ec2e729f80ec950b038a1b12e733"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586714 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-combined-ca-bundle\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586804 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-public-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586828 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-config\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586862 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-internal-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586901 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-httpd-config\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586952 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-ovndb-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.586976 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtbp\" (UniqueName: \"kubernetes.io/projected/e5862b3f-4b92-4bcc-8d77-4585e53475a8-kube-api-access-zxtbp\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.618689 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6697794ff-9xhd2" event={"ID":"d192312c-1396-4c6c-a687-b4ddfe356ded","Type":"ContainerStarted","Data":"cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.622957 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb4b6465-msstz" event={"ID":"69bc742a-a80d-43f4-90cc-993a14f7dbd5","Type":"ContainerStarted","Data":"2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.628497 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerStarted","Data":"3e3c12eff32c05384ee79bc94df59a47e32f5be9508851ce1817613b7f9e999d"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.638330 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7zvf6" event={"ID":"cdc81234-8d71-4da8-821f-62f79823de92","Type":"ContainerStarted","Data":"a09c59dbca07f1f63d63b0fe0b2e5b756fbbbfc13e082ea108ff7d2581f2ec01"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.641122 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" event={"ID":"e27f535b-910c-4a13-989f-de13019d4a7d","Type":"ContainerStarted","Data":"a5c795edb229084480324d6f84b92da6b528d15de9eac93209b4f8cc88bd6151"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.646315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerStarted","Data":"23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.648582 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkzf4" event={"ID":"12eae8a2-7f1a-447e-afbc-30bc3760f6df","Type":"ContainerStarted","Data":"f5d0ebf42cb55ccfef7a1d5efc023a3d26f7b91a7f65e9a75be2ca942b7f11e2"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.652061 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" event={"ID":"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0","Type":"ContainerStarted","Data":"35a060c1c8e9c4b3045b90d486b2afef056ab50fb25cef2de3a2d4ec1d709d85"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.661354 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b75fb948d-dzc9p" event={"ID":"e5f3e502-df83-4a0e-8240-53d8d6d78a80","Type":"ContainerStarted","Data":"9fefb793d8a707565be6a2404b1be2a72d324759c12bd748a4ad66d254dd6c43"} Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.683180 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mkzf4" podStartSLOduration=23.683161162 podStartE2EDuration="23.683161162s" podCreationTimestamp="2026-02-28 03:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:57.679161234 +0000 UTC m=+1152.343200543" watchObservedRunningTime="2026-02-28 03:54:57.683161162 +0000 UTC 
m=+1152.347200481" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.688910 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7zvf6" podStartSLOduration=7.01439562 podStartE2EDuration="43.68889557s" podCreationTimestamp="2026-02-28 03:54:14 +0000 UTC" firstStartedPulling="2026-02-28 03:54:16.607348351 +0000 UTC m=+1111.271387660" lastFinishedPulling="2026-02-28 03:54:53.281848301 +0000 UTC m=+1147.945887610" observedRunningTime="2026-02-28 03:54:57.665330775 +0000 UTC m=+1152.329370074" watchObservedRunningTime="2026-02-28 03:54:57.68889557 +0000 UTC m=+1152.352934879" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.691916 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-combined-ca-bundle\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.692108 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-public-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.692130 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-config\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.692207 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-internal-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.692293 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-httpd-config\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.692358 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-ovndb-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.692379 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtbp\" (UniqueName: \"kubernetes.io/projected/e5862b3f-4b92-4bcc-8d77-4585e53475a8-kube-api-access-zxtbp\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.700854 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-ovndb-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.703452 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-internal-tls-certs\") pod 
\"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.710745 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-public-tls-certs\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.720136 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-httpd-config\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.720795 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-config\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.729912 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-combined-ca-bundle\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc kubenswrapper[4624]: I0228 03:54:57.739005 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtbp\" (UniqueName: \"kubernetes.io/projected/e5862b3f-4b92-4bcc-8d77-4585e53475a8-kube-api-access-zxtbp\") pod \"neutron-7954bdb9b9-dwqsd\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:57 crc 
kubenswrapper[4624]: I0228 03:54:57.898678 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.709196 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b75fb948d-dzc9p" event={"ID":"e5f3e502-df83-4a0e-8240-53d8d6d78a80","Type":"ContainerStarted","Data":"f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.720487 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed303757-67e3-41bc-a252-d37da81b5258","Type":"ContainerStarted","Data":"e02e5331fd3996f4faf92f59864ecc1139c31b5ccd35ba189782cfb5124b1052"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.735691 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87","Type":"ContainerStarted","Data":"42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.742701 4624 generic.go:334] "Generic (PLEG): container finished" podID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerID="300ad89ebdfc43ba6353036a3e2086a83c22deec533c32917dbcd16d1ab6be93" exitCode=0 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.743311 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" event={"ID":"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0","Type":"ContainerDied","Data":"300ad89ebdfc43ba6353036a3e2086a83c22deec533c32917dbcd16d1ab6be93"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.760419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb4b6465-msstz" event={"ID":"69bc742a-a80d-43f4-90cc-993a14f7dbd5","Type":"ContainerStarted","Data":"0ff56fa1719908852e8484a7ceaa2041362f9a20c8610502428f676a606e3435"} Feb 28 03:54:58 crc 
kubenswrapper[4624]: I0228 03:54:58.760626 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78cb4b6465-msstz" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon-log" containerID="cri-o://2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44" gracePeriod=30 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.760928 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78cb4b6465-msstz" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon" containerID="cri-o://0ff56fa1719908852e8484a7ceaa2041362f9a20c8610502428f676a606e3435" gracePeriod=30 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.765808 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerStarted","Data":"5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.775140 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"061053fe-7564-4fbc-8b09-93fea568b77e","Type":"ContainerStarted","Data":"326d330c54445bd9e64549f8b30ab483b045f138c92e5c8ca397eb281e7d7876"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.800846 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5cbd76fc-29hwp" event={"ID":"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b","Type":"ContainerStarted","Data":"5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.801041 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c5cbd76fc-29hwp" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon-log" containerID="cri-o://87f6910ac28a0d1e4d386ad296e664e46c67ec2e729f80ec950b038a1b12e733" gracePeriod=30 Feb 28 03:54:58 crc 
kubenswrapper[4624]: I0228 03:54:58.802689 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c5cbd76fc-29hwp" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon" containerID="cri-o://5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a" gracePeriod=30 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.878894 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78cb4b6465-msstz" podStartSLOduration=8.17753578 podStartE2EDuration="44.878874265s" podCreationTimestamp="2026-02-28 03:54:14 +0000 UTC" firstStartedPulling="2026-02-28 03:54:16.594119479 +0000 UTC m=+1111.258158788" lastFinishedPulling="2026-02-28 03:54:53.295457964 +0000 UTC m=+1147.959497273" observedRunningTime="2026-02-28 03:54:58.840617478 +0000 UTC m=+1153.504656787" watchObservedRunningTime="2026-02-28 03:54:58.878874265 +0000 UTC m=+1153.542913574" Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.884857 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cc988c5cd-svksm" podStartSLOduration=35.884832679 podStartE2EDuration="35.884832679s" podCreationTimestamp="2026-02-28 03:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:58.882003731 +0000 UTC m=+1153.546043040" watchObservedRunningTime="2026-02-28 03:54:58.884832679 +0000 UTC m=+1153.548871988" Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.886383 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6697794ff-9xhd2" event={"ID":"d192312c-1396-4c6c-a687-b4ddfe356ded","Type":"ContainerStarted","Data":"28426b33d1604a4113f611ec024970e510de3204f6c6077604b6d2a92d46ff21"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.886603 4624 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-6697794ff-9xhd2" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon-log" containerID="cri-o://cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6" gracePeriod=30 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.886935 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6697794ff-9xhd2" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon" containerID="cri-o://28426b33d1604a4113f611ec024970e510de3204f6c6077604b6d2a92d46ff21" gracePeriod=30 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.925692 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerStarted","Data":"0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163"} Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.951497 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c5cbd76fc-29hwp" podStartSLOduration=7.52142785 podStartE2EDuration="45.951473403s" podCreationTimestamp="2026-02-28 03:54:13 +0000 UTC" firstStartedPulling="2026-02-28 03:54:16.452304315 +0000 UTC m=+1111.116343624" lastFinishedPulling="2026-02-28 03:54:54.882349868 +0000 UTC m=+1149.546389177" observedRunningTime="2026-02-28 03:54:58.933586623 +0000 UTC m=+1153.597625932" watchObservedRunningTime="2026-02-28 03:54:58.951473403 +0000 UTC m=+1153.615512712" Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.952151 4624 generic.go:334] "Generic (PLEG): container finished" podID="e27f535b-910c-4a13-989f-de13019d4a7d" containerID="709be7226f9cd201ff3b97db5a54a7813f6a5db4cb671a14c1a86640dc231b9a" exitCode=0 Feb 28 03:54:58 crc kubenswrapper[4624]: I0228 03:54:58.952433 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" 
event={"ID":"e27f535b-910c-4a13-989f-de13019d4a7d","Type":"ContainerDied","Data":"709be7226f9cd201ff3b97db5a54a7813f6a5db4cb671a14c1a86640dc231b9a"} Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.075779 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6697794ff-9xhd2" podStartSLOduration=4.775538602 podStartE2EDuration="42.075717785s" podCreationTimestamp="2026-02-28 03:54:17 +0000 UTC" firstStartedPulling="2026-02-28 03:54:18.771492962 +0000 UTC m=+1113.435532271" lastFinishedPulling="2026-02-28 03:54:56.071672145 +0000 UTC m=+1150.735711454" observedRunningTime="2026-02-28 03:54:58.99992507 +0000 UTC m=+1153.663964379" watchObservedRunningTime="2026-02-28 03:54:59.075717785 +0000 UTC m=+1153.739757094" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.111449 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7954bdb9b9-dwqsd"] Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.211096 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b4bc59cd8-fkd4p" podStartSLOduration=36.211047472 podStartE2EDuration="36.211047472s" podCreationTimestamp="2026-02-28 03:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:54:59.042346362 +0000 UTC m=+1153.706385671" watchObservedRunningTime="2026-02-28 03:54:59.211047472 +0000 UTC m=+1153.875086781" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.612705 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.695710 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-config\") pod \"e27f535b-910c-4a13-989f-de13019d4a7d\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.695772 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-svc\") pod \"e27f535b-910c-4a13-989f-de13019d4a7d\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.695815 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-sb\") pod \"e27f535b-910c-4a13-989f-de13019d4a7d\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.695899 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58xc5\" (UniqueName: \"kubernetes.io/projected/e27f535b-910c-4a13-989f-de13019d4a7d-kube-api-access-58xc5\") pod \"e27f535b-910c-4a13-989f-de13019d4a7d\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.695946 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-nb\") pod \"e27f535b-910c-4a13-989f-de13019d4a7d\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.696013 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-swift-storage-0\") pod \"e27f535b-910c-4a13-989f-de13019d4a7d\" (UID: \"e27f535b-910c-4a13-989f-de13019d4a7d\") " Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.717407 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27f535b-910c-4a13-989f-de13019d4a7d-kube-api-access-58xc5" (OuterVolumeSpecName: "kube-api-access-58xc5") pod "e27f535b-910c-4a13-989f-de13019d4a7d" (UID: "e27f535b-910c-4a13-989f-de13019d4a7d"). InnerVolumeSpecName "kube-api-access-58xc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.799691 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58xc5\" (UniqueName: \"kubernetes.io/projected/e27f535b-910c-4a13-989f-de13019d4a7d-kube-api-access-58xc5\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.800305 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e27f535b-910c-4a13-989f-de13019d4a7d" (UID: "e27f535b-910c-4a13-989f-de13019d4a7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.810394 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e27f535b-910c-4a13-989f-de13019d4a7d" (UID: "e27f535b-910c-4a13-989f-de13019d4a7d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.810684 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-config" (OuterVolumeSpecName: "config") pod "e27f535b-910c-4a13-989f-de13019d4a7d" (UID: "e27f535b-910c-4a13-989f-de13019d4a7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.810875 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e27f535b-910c-4a13-989f-de13019d4a7d" (UID: "e27f535b-910c-4a13-989f-de13019d4a7d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.814457 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e27f535b-910c-4a13-989f-de13019d4a7d" (UID: "e27f535b-910c-4a13-989f-de13019d4a7d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.909685 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.909716 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.909727 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.909736 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:54:59 crc kubenswrapper[4624]: I0228 03:54:59.909746 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e27f535b-910c-4a13-989f-de13019d4a7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.034376 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b75fb948d-dzc9p" event={"ID":"e5f3e502-df83-4a0e-8240-53d8d6d78a80","Type":"ContainerStarted","Data":"0299974817a49a23813b6a4e0541439b33a4eb4cdaf556cd7a52e1dc71f6dd3e"} Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.036059 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.061787 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-7954bdb9b9-dwqsd" event={"ID":"e5862b3f-4b92-4bcc-8d77-4585e53475a8","Type":"ContainerStarted","Data":"3bf863094fac0cc574e88584ad7aef70be00a5b7d815b10d280024c0b502eb20"} Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.061837 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954bdb9b9-dwqsd" event={"ID":"e5862b3f-4b92-4bcc-8d77-4585e53475a8","Type":"ContainerStarted","Data":"c8edab984b8a8fa8f3c645c37f3fc669fda19d91afef94ad0a9af5fe5a6d3584"} Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.109267 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.140529 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-46bmz" event={"ID":"e27f535b-910c-4a13-989f-de13019d4a7d","Type":"ContainerDied","Data":"a5c795edb229084480324d6f84b92da6b528d15de9eac93209b4f8cc88bd6151"} Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.140601 4624 scope.go:117] "RemoveContainer" containerID="709be7226f9cd201ff3b97db5a54a7813f6a5db4cb671a14c1a86640dc231b9a" Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.188670 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b75fb948d-dzc9p" podStartSLOduration=5.188649291 podStartE2EDuration="5.188649291s" podCreationTimestamp="2026-02-28 03:54:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:00.108592849 +0000 UTC m=+1154.772632158" watchObservedRunningTime="2026-02-28 03:55:00.188649291 +0000 UTC m=+1154.852688600" Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.478807 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-46bmz"] Feb 28 03:55:00 crc kubenswrapper[4624]: I0228 03:55:00.488684 4624 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-46bmz"] Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.368507 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" event={"ID":"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0","Type":"ContainerStarted","Data":"638710381bd4a8ec11be2f45faaa4542107b3f946b7568f2a28d0910c2be9e74"} Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.369065 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.379330 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"061053fe-7564-4fbc-8b09-93fea568b77e","Type":"ContainerStarted","Data":"18e8b0adf8dacee2279e5b14dd8b365ee82893628728eec9f75da364da975843"} Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.379540 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-log" containerID="cri-o://326d330c54445bd9e64549f8b30ab483b045f138c92e5c8ca397eb281e7d7876" gracePeriod=30 Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.379905 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-httpd" containerID="cri-o://18e8b0adf8dacee2279e5b14dd8b365ee82893628728eec9f75da364da975843" gracePeriod=30 Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.399353 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" podStartSLOduration=7.399325994 podStartE2EDuration="7.399325994s" podCreationTimestamp="2026-02-28 03:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:01.391602042 +0000 UTC m=+1156.055641351" watchObservedRunningTime="2026-02-28 03:55:01.399325994 +0000 UTC m=+1156.063365293" Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.402151 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-log" containerID="cri-o://e02e5331fd3996f4faf92f59864ecc1139c31b5ccd35ba189782cfb5124b1052" gracePeriod=30 Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.402208 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed303757-67e3-41bc-a252-d37da81b5258","Type":"ContainerStarted","Data":"0b5961340c9da9d47a65664aa91971e3d160639c7da0adb51a4d34bfd494320a"} Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.402307 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-httpd" containerID="cri-o://0b5961340c9da9d47a65664aa91971e3d160639c7da0adb51a4d34bfd494320a" gracePeriod=30 Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.435499 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954bdb9b9-dwqsd" event={"ID":"e5862b3f-4b92-4bcc-8d77-4585e53475a8","Type":"ContainerStarted","Data":"fed657b96509faabeb0b6e18490ac2b7a15cdd03d6bb24cec2741fa935cd65d3"} Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.436778 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.473001 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.47297058 podStartE2EDuration="27.47297058s" podCreationTimestamp="2026-02-28 03:54:34 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:01.47152678 +0000 UTC m=+1156.135566089" watchObservedRunningTime="2026-02-28 03:55:01.47297058 +0000 UTC m=+1156.137009889" Feb 28 03:55:01 crc kubenswrapper[4624]: I0228 03:55:01.482957 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.482930072 podStartE2EDuration="26.482930072s" podCreationTimestamp="2026-02-28 03:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:01.433067878 +0000 UTC m=+1156.097107187" watchObservedRunningTime="2026-02-28 03:55:01.482930072 +0000 UTC m=+1156.146969381" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.114006 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27f535b-910c-4a13-989f-de13019d4a7d" path="/var/lib/kubelet/pods/e27f535b-910c-4a13-989f-de13019d4a7d/volumes" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.504625 4624 generic.go:334] "Generic (PLEG): container finished" podID="061053fe-7564-4fbc-8b09-93fea568b77e" containerID="18e8b0adf8dacee2279e5b14dd8b365ee82893628728eec9f75da364da975843" exitCode=143 Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.505037 4624 generic.go:334] "Generic (PLEG): container finished" podID="061053fe-7564-4fbc-8b09-93fea568b77e" containerID="326d330c54445bd9e64549f8b30ab483b045f138c92e5c8ca397eb281e7d7876" exitCode=143 Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.504846 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"061053fe-7564-4fbc-8b09-93fea568b77e","Type":"ContainerDied","Data":"18e8b0adf8dacee2279e5b14dd8b365ee82893628728eec9f75da364da975843"} Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.505259 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"061053fe-7564-4fbc-8b09-93fea568b77e","Type":"ContainerDied","Data":"326d330c54445bd9e64549f8b30ab483b045f138c92e5c8ca397eb281e7d7876"} Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.505298 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"061053fe-7564-4fbc-8b09-93fea568b77e","Type":"ContainerDied","Data":"dd699482354330ef106e3d8e8f78de5abfbce6419e3e948a011ced04336eb6a2"} Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.505310 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd699482354330ef106e3d8e8f78de5abfbce6419e3e948a011ced04336eb6a2" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.507623 4624 generic.go:334] "Generic (PLEG): container finished" podID="ed303757-67e3-41bc-a252-d37da81b5258" containerID="0b5961340c9da9d47a65664aa91971e3d160639c7da0adb51a4d34bfd494320a" exitCode=143 Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.507649 4624 generic.go:334] "Generic (PLEG): container finished" podID="ed303757-67e3-41bc-a252-d37da81b5258" containerID="e02e5331fd3996f4faf92f59864ecc1139c31b5ccd35ba189782cfb5124b1052" exitCode=143 Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.508147 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed303757-67e3-41bc-a252-d37da81b5258","Type":"ContainerDied","Data":"0b5961340c9da9d47a65664aa91971e3d160639c7da0adb51a4d34bfd494320a"} Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.508187 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed303757-67e3-41bc-a252-d37da81b5258","Type":"ContainerDied","Data":"e02e5331fd3996f4faf92f59864ecc1139c31b5ccd35ba189782cfb5124b1052"} Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.609931 4624 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.631887 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.662384 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7954bdb9b9-dwqsd" podStartSLOduration=5.662353359 podStartE2EDuration="5.662353359s" podCreationTimestamp="2026-02-28 03:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:01.522027533 +0000 UTC m=+1156.186066852" watchObservedRunningTime="2026-02-28 03:55:02.662353359 +0000 UTC m=+1157.326392668" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.797760 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-config-data\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.797809 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-scripts\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.797885 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-combined-ca-bundle\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.797936 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2zk\" (UniqueName: \"kubernetes.io/projected/ed303757-67e3-41bc-a252-d37da81b5258-kube-api-access-jw2zk\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.797968 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-httpd-run\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798011 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-logs\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798107 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798133 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg9bs\" (UniqueName: \"kubernetes.io/projected/061053fe-7564-4fbc-8b09-93fea568b77e-kube-api-access-bg9bs\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798157 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-config-data\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: 
\"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798175 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-logs\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798225 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798243 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-httpd-run\") pod \"ed303757-67e3-41bc-a252-d37da81b5258\" (UID: \"ed303757-67e3-41bc-a252-d37da81b5258\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798285 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-scripts\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.798332 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-combined-ca-bundle\") pod \"061053fe-7564-4fbc-8b09-93fea568b77e\" (UID: \"061053fe-7564-4fbc-8b09-93fea568b77e\") " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.799566 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.807228 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-logs" (OuterVolumeSpecName: "logs") pod "061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.819000 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.833782 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-logs" (OuterVolumeSpecName: "logs") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.847538 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-scripts" (OuterVolumeSpecName: "scripts") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.848002 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061053fe-7564-4fbc-8b09-93fea568b77e-kube-api-access-bg9bs" (OuterVolumeSpecName: "kube-api-access-bg9bs") pod "061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "kube-api-access-bg9bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.847575 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-scripts" (OuterVolumeSpecName: "scripts") pod "061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.853827 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed303757-67e3-41bc-a252-d37da81b5258-kube-api-access-jw2zk" (OuterVolumeSpecName: "kube-api-access-jw2zk") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "kube-api-access-jw2zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.890147 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.890276 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901799 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901829 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw2zk\" (UniqueName: \"kubernetes.io/projected/ed303757-67e3-41bc-a252-d37da81b5258-kube-api-access-jw2zk\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901841 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901852 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/061053fe-7564-4fbc-8b09-93fea568b77e-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901881 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901891 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg9bs\" (UniqueName: 
\"kubernetes.io/projected/061053fe-7564-4fbc-8b09-93fea568b77e-kube-api-access-bg9bs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901899 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901914 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901922 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed303757-67e3-41bc-a252-d37da81b5258-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.901930 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.957708 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.979355 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 28 03:55:02 crc kubenswrapper[4624]: I0228 03:55:02.998804 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.004241 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.004285 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.004295 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.062866 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.098551 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-config-data" (OuterVolumeSpecName: "config-data") pod "061053fe-7564-4fbc-8b09-93fea568b77e" (UID: "061053fe-7564-4fbc-8b09-93fea568b77e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.107011 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/061053fe-7564-4fbc-8b09-93fea568b77e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.107071 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.155337 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-config-data" (OuterVolumeSpecName: "config-data") pod "ed303757-67e3-41bc-a252-d37da81b5258" (UID: "ed303757-67e3-41bc-a252-d37da81b5258"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.209343 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed303757-67e3-41bc-a252-d37da81b5258-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.524737 4624 generic.go:334] "Generic (PLEG): container finished" podID="cdc81234-8d71-4da8-821f-62f79823de92" containerID="a09c59dbca07f1f63d63b0fe0b2e5b756fbbbfc13e082ea108ff7d2581f2ec01" exitCode=0 Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.524882 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7zvf6" event={"ID":"cdc81234-8d71-4da8-821f-62f79823de92","Type":"ContainerDied","Data":"a09c59dbca07f1f63d63b0fe0b2e5b756fbbbfc13e082ea108ff7d2581f2ec01"} Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.533648 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"ed303757-67e3-41bc-a252-d37da81b5258","Type":"ContainerDied","Data":"4e607a3b7069bc86d7aa8b9c5ce5f160bad4f3c69f9c2205d0dcff29d28fadb8"} Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.533712 4624 scope.go:117] "RemoveContainer" containerID="0b5961340c9da9d47a65664aa91971e3d160639c7da0adb51a4d34bfd494320a" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.533887 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.533895 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.630616 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.640142 4624 scope.go:117] "RemoveContainer" containerID="e02e5331fd3996f4faf92f59864ecc1139c31b5ccd35ba189782cfb5124b1052" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.645187 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.656382 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.669062 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.689393 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: E0228 03:55:03.689972 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-log" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.689990 4624 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-log" Feb 28 03:55:03 crc kubenswrapper[4624]: E0228 03:55:03.690027 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-httpd" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690034 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-httpd" Feb 28 03:55:03 crc kubenswrapper[4624]: E0228 03:55:03.690051 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-log" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690060 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-log" Feb 28 03:55:03 crc kubenswrapper[4624]: E0228 03:55:03.690071 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-httpd" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690091 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-httpd" Feb 28 03:55:03 crc kubenswrapper[4624]: E0228 03:55:03.690102 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f535b-910c-4a13-989f-de13019d4a7d" containerName="init" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690108 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f535b-910c-4a13-989f-de13019d4a7d" containerName="init" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690320 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-httpd" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690339 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-log" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690366 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" containerName="glance-httpd" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690376 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27f535b-910c-4a13-989f-de13019d4a7d" containerName="init" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.690390 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed303757-67e3-41bc-a252-d37da81b5258" containerName="glance-log" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.691579 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.698559 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.698900 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.699138 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ptb94" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.699358 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.716134 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.718064 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.728428 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.743122 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.749213 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.765109 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.821997 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.822432 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.822531 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ph5\" (UniqueName: \"kubernetes.io/projected/0e91936f-89cc-4665-b489-773e9c2682f2-kube-api-access-t5ph5\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" 
Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.822695 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.822805 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.822910 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823021 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823108 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " 
pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823228 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvth\" (UniqueName: \"kubernetes.io/projected/f2113d6b-377f-426a-9886-d0cd608558b1-kube-api-access-pwvth\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823309 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823401 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823495 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823567 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " 
pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823648 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823722 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-logs\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.823914 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925641 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925730 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " 
pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925749 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925771 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwvth\" (UniqueName: \"kubernetes.io/projected/f2113d6b-377f-426a-9886-d0cd608558b1-kube-api-access-pwvth\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925794 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925823 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925859 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 
03:55:03.925875 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925895 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925911 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-logs\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925942 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.925982 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.926006 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.926034 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ph5\" (UniqueName: \"kubernetes.io/projected/0e91936f-89cc-4665-b489-773e9c2682f2-kube-api-access-t5ph5\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.926078 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.926109 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.926179 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.926401 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.933697 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.933699 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.946606 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.946613 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.947218 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.947296 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.947412 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-logs\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.947493 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.951907 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.967833 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwvth\" (UniqueName: \"kubernetes.io/projected/f2113d6b-377f-426a-9886-d0cd608558b1-kube-api-access-pwvth\") pod 
\"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.972582 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.974068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ph5\" (UniqueName: \"kubernetes.io/projected/0e91936f-89cc-4665-b489-773e9c2682f2-kube-api-access-t5ph5\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.987780 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:03 crc kubenswrapper[4624]: I0228 03:55:03.997612 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " pod="openstack/glance-default-external-api-0" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.032379 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") " 
pod="openstack/glance-default-external-api-0" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.038237 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.141156 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061053fe-7564-4fbc-8b09-93fea568b77e" path="/var/lib/kubelet/pods/061053fe-7564-4fbc-8b09-93fea568b77e/volumes" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.152222 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed303757-67e3-41bc-a252-d37da81b5258" path="/var/lib/kubelet/pods/ed303757-67e3-41bc-a252-d37da81b5258/volumes" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.153591 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.153642 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.197955 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.322433 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.328281 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.359124 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.487407 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.592971 4624 generic.go:334] "Generic (PLEG): container finished" podID="12eae8a2-7f1a-447e-afbc-30bc3760f6df" containerID="f5d0ebf42cb55ccfef7a1d5efc023a3d26f7b91a7f65e9a75be2ca942b7f11e2" exitCode=0 Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.593461 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkzf4" event={"ID":"12eae8a2-7f1a-447e-afbc-30bc3760f6df","Type":"ContainerDied","Data":"f5d0ebf42cb55ccfef7a1d5efc023a3d26f7b91a7f65e9a75be2ca942b7f11e2"} Feb 28 03:55:04 crc kubenswrapper[4624]: I0228 03:55:04.878299 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.186918 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.254656 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7zvf6" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.282321 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.367799 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-scripts\") pod \"cdc81234-8d71-4da8-821f-62f79823de92\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.367855 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-config-data\") pod \"cdc81234-8d71-4da8-821f-62f79823de92\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.367907 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5vg\" (UniqueName: \"kubernetes.io/projected/cdc81234-8d71-4da8-821f-62f79823de92-kube-api-access-8f5vg\") pod \"cdc81234-8d71-4da8-821f-62f79823de92\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.367972 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc81234-8d71-4da8-821f-62f79823de92-logs\") pod \"cdc81234-8d71-4da8-821f-62f79823de92\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.368188 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-combined-ca-bundle\") pod \"cdc81234-8d71-4da8-821f-62f79823de92\" (UID: \"cdc81234-8d71-4da8-821f-62f79823de92\") " Feb 28 
03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.377708 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdc81234-8d71-4da8-821f-62f79823de92-logs" (OuterVolumeSpecName: "logs") pod "cdc81234-8d71-4da8-821f-62f79823de92" (UID: "cdc81234-8d71-4da8-821f-62f79823de92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.389538 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdc81234-8d71-4da8-821f-62f79823de92-kube-api-access-8f5vg" (OuterVolumeSpecName: "kube-api-access-8f5vg") pod "cdc81234-8d71-4da8-821f-62f79823de92" (UID: "cdc81234-8d71-4da8-821f-62f79823de92"). InnerVolumeSpecName "kube-api-access-8f5vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.422931 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-scripts" (OuterVolumeSpecName: "scripts") pod "cdc81234-8d71-4da8-821f-62f79823de92" (UID: "cdc81234-8d71-4da8-821f-62f79823de92"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.423160 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bs8v6"] Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.423482 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="dnsmasq-dns" containerID="cri-o://80deb2970ff82709b20527153c827237964e035a24da3c743cac2918bc221d2d" gracePeriod=10 Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.427407 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.464484 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-config-data" (OuterVolumeSpecName: "config-data") pod "cdc81234-8d71-4da8-821f-62f79823de92" (UID: "cdc81234-8d71-4da8-821f-62f79823de92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.471787 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.471820 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.471837 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f5vg\" (UniqueName: \"kubernetes.io/projected/cdc81234-8d71-4da8-821f-62f79823de92-kube-api-access-8f5vg\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.471856 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc81234-8d71-4da8-821f-62f79823de92-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.485311 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdc81234-8d71-4da8-821f-62f79823de92" (UID: "cdc81234-8d71-4da8-821f-62f79823de92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.575426 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc81234-8d71-4da8-821f-62f79823de92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.671311 4624 generic.go:334] "Generic (PLEG): container finished" podID="df7cc5ba-b521-4349-8306-35d633072cef" containerID="80deb2970ff82709b20527153c827237964e035a24da3c743cac2918bc221d2d" exitCode=0 Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.671421 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" event={"ID":"df7cc5ba-b521-4349-8306-35d633072cef","Type":"ContainerDied","Data":"80deb2970ff82709b20527153c827237964e035a24da3c743cac2918bc221d2d"} Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.708540 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7zvf6" event={"ID":"cdc81234-8d71-4da8-821f-62f79823de92","Type":"ContainerDied","Data":"8f97efadfc4d61c35deb399bedc5abfabba8c9dd826a62159ac090b623306789"} Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.708590 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f97efadfc4d61c35deb399bedc5abfabba8c9dd826a62159ac090b623306789" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.708684 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7zvf6" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.753301 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e91936f-89cc-4665-b489-773e9c2682f2","Type":"ContainerStarted","Data":"83188a0e1314dba0d9829c33020240bef489f3998cb4a65f935c686bc1095a1a"} Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.770905 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-96545fdc6-xmzr4"] Feb 28 03:55:05 crc kubenswrapper[4624]: E0228 03:55:05.771400 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdc81234-8d71-4da8-821f-62f79823de92" containerName="placement-db-sync" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.771420 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdc81234-8d71-4da8-821f-62f79823de92" containerName="placement-db-sync" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.771640 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdc81234-8d71-4da8-821f-62f79823de92" containerName="placement-db-sync" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.781455 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.789935 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2113d6b-377f-426a-9886-d0cd608558b1","Type":"ContainerStarted","Data":"842faf43ecb5cfbf57046317d937dbd59116fb966263c794bc46f822246d5823"} Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.790978 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.791444 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.791504 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.794104 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.794413 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k6ls8" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.795428 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-96545fdc6-xmzr4"] Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.902585 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-public-tls-certs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.902963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-scripts\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.903052 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhvl\" (UniqueName: \"kubernetes.io/projected/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-kube-api-access-fhhvl\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.903162 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-internal-tls-certs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.903243 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-logs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.903380 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-config-data\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:05 crc kubenswrapper[4624]: I0228 03:55:05.903454 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-combined-ca-bundle\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012167 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-public-tls-certs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012249 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-scripts\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012285 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhhvl\" (UniqueName: \"kubernetes.io/projected/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-kube-api-access-fhhvl\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012318 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-internal-tls-certs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012342 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-logs\") pod 
\"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012379 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-config-data\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.012396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-combined-ca-bundle\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.017790 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-logs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.019921 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-internal-tls-certs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.031769 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-combined-ca-bundle\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" 
Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.032358 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-public-tls-certs\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.034572 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-config-data\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.056665 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-scripts\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.059651 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhhvl\" (UniqueName: \"kubernetes.io/projected/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-kube-api-access-fhhvl\") pod \"placement-96545fdc6-xmzr4\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.226806 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.812941 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2113d6b-377f-426a-9886-d0cd608558b1","Type":"ContainerStarted","Data":"68b8efad63e5578a813fc70881d06e065546606f2906634fb5d2972b3ed539a8"} Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.818063 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9l75" event={"ID":"34b0ae2c-3cbb-419b-8214-739eea04c9a4","Type":"ContainerStarted","Data":"9b0af6eb06910579d04fbc8d93136e0834c1eb7a82693d13312e6477d27651f1"} Feb 28 03:55:06 crc kubenswrapper[4624]: I0228 03:55:06.855995 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l9l75" podStartSLOduration=3.674572115 podStartE2EDuration="52.855969673s" podCreationTimestamp="2026-02-28 03:54:14 +0000 UTC" firstStartedPulling="2026-02-28 03:54:16.60694426 +0000 UTC m=+1111.270983569" lastFinishedPulling="2026-02-28 03:55:05.788341818 +0000 UTC m=+1160.452381127" observedRunningTime="2026-02-28 03:55:06.84637604 +0000 UTC m=+1161.510415349" watchObservedRunningTime="2026-02-28 03:55:06.855969673 +0000 UTC m=+1161.520008982" Feb 28 03:55:07 crc kubenswrapper[4624]: I0228 03:55:07.838849 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e91936f-89cc-4665-b489-773e9c2682f2","Type":"ContainerStarted","Data":"0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff"} Feb 28 03:55:07 crc kubenswrapper[4624]: I0228 03:55:07.929518 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:55:12 crc kubenswrapper[4624]: I0228 03:55:12.906773 4624 generic.go:334] "Generic (PLEG): container finished" podID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" 
containerID="9b0af6eb06910579d04fbc8d93136e0834c1eb7a82693d13312e6477d27651f1" exitCode=0 Feb 28 03:55:12 crc kubenswrapper[4624]: I0228 03:55:12.906846 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9l75" event={"ID":"34b0ae2c-3cbb-419b-8214-739eea04c9a4","Type":"ContainerDied","Data":"9b0af6eb06910579d04fbc8d93136e0834c1eb7a82693d13312e6477d27651f1"} Feb 28 03:55:14 crc kubenswrapper[4624]: I0228 03:55:14.156289 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:55:14 crc kubenswrapper[4624]: I0228 03:55:14.324068 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:14.996311 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.003878 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.004075 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkzf4" event={"ID":"12eae8a2-7f1a-447e-afbc-30bc3760f6df","Type":"ContainerDied","Data":"e046d7f13184d20e0bde2769363dfc7aaff8a8b882b1f53d10e89791430ac961"} Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.004124 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e046d7f13184d20e0bde2769363dfc7aaff8a8b882b1f53d10e89791430ac961" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.015610 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l9l75" event={"ID":"34b0ae2c-3cbb-419b-8214-739eea04c9a4","Type":"ContainerDied","Data":"9cd6e91d9ee0a9ca2db41d7dc8daaafd7a07c95b54e085bcc5cecb773ca36562"} Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.015643 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd6e91d9ee0a9ca2db41d7dc8daaafd7a07c95b54e085bcc5cecb773ca36562" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.092995 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-credential-keys\") pod \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.093036 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-svc\") pod \"df7cc5ba-b521-4349-8306-35d633072cef\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102542 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-config-data\") pod \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102616 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-fernet-keys\") pod \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102647 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-scripts\") pod \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102684 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-nb\") pod \"df7cc5ba-b521-4349-8306-35d633072cef\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102731 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-sb\") pod \"df7cc5ba-b521-4349-8306-35d633072cef\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102764 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-swift-storage-0\") pod \"df7cc5ba-b521-4349-8306-35d633072cef\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102788 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-combined-ca-bundle\") pod \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102830 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjh9v\" (UniqueName: \"kubernetes.io/projected/12eae8a2-7f1a-447e-afbc-30bc3760f6df-kube-api-access-rjh9v\") pod \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\" (UID: \"12eae8a2-7f1a-447e-afbc-30bc3760f6df\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102901 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-config\") pod \"df7cc5ba-b521-4349-8306-35d633072cef\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.102918 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltl48\" (UniqueName: \"kubernetes.io/projected/df7cc5ba-b521-4349-8306-35d633072cef-kube-api-access-ltl48\") pod \"df7cc5ba-b521-4349-8306-35d633072cef\" (UID: \"df7cc5ba-b521-4349-8306-35d633072cef\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.131848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" event={"ID":"df7cc5ba-b521-4349-8306-35d633072cef","Type":"ContainerDied","Data":"1bf6813312b05e1c3b09708625f9dd9435374cd8dd38ec2b1f7f96cc67660596"} Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.131935 4624 scope.go:117] "RemoveContainer" containerID="80deb2970ff82709b20527153c827237964e035a24da3c743cac2918bc221d2d" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.175564 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.186582 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "12eae8a2-7f1a-447e-afbc-30bc3760f6df" (UID: "12eae8a2-7f1a-447e-afbc-30bc3760f6df"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.186700 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-scripts" (OuterVolumeSpecName: "scripts") pod "12eae8a2-7f1a-447e-afbc-30bc3760f6df" (UID: "12eae8a2-7f1a-447e-afbc-30bc3760f6df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.205946 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l9l75" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.216766 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "12eae8a2-7f1a-447e-afbc-30bc3760f6df" (UID: "12eae8a2-7f1a-447e-afbc-30bc3760f6df"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.218447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7cc5ba-b521-4349-8306-35d633072cef-kube-api-access-ltl48" (OuterVolumeSpecName: "kube-api-access-ltl48") pod "df7cc5ba-b521-4349-8306-35d633072cef" (UID: "df7cc5ba-b521-4349-8306-35d633072cef"). InnerVolumeSpecName "kube-api-access-ltl48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.219877 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltl48\" (UniqueName: \"kubernetes.io/projected/df7cc5ba-b521-4349-8306-35d633072cef-kube-api-access-ltl48\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.219925 4624 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.219939 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.219949 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.237903 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12eae8a2-7f1a-447e-afbc-30bc3760f6df-kube-api-access-rjh9v" (OuterVolumeSpecName: "kube-api-access-rjh9v") pod "12eae8a2-7f1a-447e-afbc-30bc3760f6df" (UID: "12eae8a2-7f1a-447e-afbc-30bc3760f6df"). InnerVolumeSpecName "kube-api-access-rjh9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.254316 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12eae8a2-7f1a-447e-afbc-30bc3760f6df" (UID: "12eae8a2-7f1a-447e-afbc-30bc3760f6df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.275861 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-config-data" (OuterVolumeSpecName: "config-data") pod "12eae8a2-7f1a-447e-afbc-30bc3760f6df" (UID: "12eae8a2-7f1a-447e-afbc-30bc3760f6df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.276826 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-config" (OuterVolumeSpecName: "config") pod "df7cc5ba-b521-4349-8306-35d633072cef" (UID: "df7cc5ba-b521-4349-8306-35d633072cef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.315822 4624 scope.go:117] "RemoveContainer" containerID="6664ee977520eea8a8bc1362c2e38e1d2af422afada72900b1ed649773cdbc9f" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.321063 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lkzm\" (UniqueName: \"kubernetes.io/projected/34b0ae2c-3cbb-419b-8214-739eea04c9a4-kube-api-access-4lkzm\") pod \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.322805 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-combined-ca-bundle\") pod \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.323047 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-db-sync-config-data\") pod \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\" (UID: \"34b0ae2c-3cbb-419b-8214-739eea04c9a4\") " Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.324522 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.337852 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12eae8a2-7f1a-447e-afbc-30bc3760f6df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.337970 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjh9v\" (UniqueName: \"kubernetes.io/projected/12eae8a2-7f1a-447e-afbc-30bc3760f6df-kube-api-access-rjh9v\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.338047 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.345426 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b0ae2c-3cbb-419b-8214-739eea04c9a4-kube-api-access-4lkzm" (OuterVolumeSpecName: "kube-api-access-4lkzm") pod "34b0ae2c-3cbb-419b-8214-739eea04c9a4" (UID: "34b0ae2c-3cbb-419b-8214-739eea04c9a4"). InnerVolumeSpecName "kube-api-access-4lkzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.355408 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df7cc5ba-b521-4349-8306-35d633072cef" (UID: "df7cc5ba-b521-4349-8306-35d633072cef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.361534 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-bs8v6" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.368633 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "34b0ae2c-3cbb-419b-8214-739eea04c9a4" (UID: "34b0ae2c-3cbb-419b-8214-739eea04c9a4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.375632 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df7cc5ba-b521-4349-8306-35d633072cef" (UID: "df7cc5ba-b521-4349-8306-35d633072cef"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.440315 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lkzm\" (UniqueName: \"kubernetes.io/projected/34b0ae2c-3cbb-419b-8214-739eea04c9a4-kube-api-access-4lkzm\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.440359 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.440372 4624 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.440385 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.471556 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df7cc5ba-b521-4349-8306-35d633072cef" (UID: "df7cc5ba-b521-4349-8306-35d633072cef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.511029 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34b0ae2c-3cbb-419b-8214-739eea04c9a4" (UID: "34b0ae2c-3cbb-419b-8214-739eea04c9a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.525211 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df7cc5ba-b521-4349-8306-35d633072cef" (UID: "df7cc5ba-b521-4349-8306-35d633072cef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.534637 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-96545fdc6-xmzr4"] Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.543940 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.543993 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7cc5ba-b521-4349-8306-35d633072cef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:15 crc kubenswrapper[4624]: I0228 03:55:15.544009 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b0ae2c-3cbb-419b-8214-739eea04c9a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.219757 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96545fdc6-xmzr4" event={"ID":"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc","Type":"ContainerStarted","Data":"30e10c2767bb96c30db9517e6e114dd489191635f32feff0ac7267f0000c9221"} Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.238365 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mkzf4" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.240735 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l9l75" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.267852 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bs8v6"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.394257 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f48865754-rdngs"] Feb 28 03:55:16 crc kubenswrapper[4624]: E0228 03:55:16.408811 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="init" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.408866 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="init" Feb 28 03:55:16 crc kubenswrapper[4624]: E0228 03:55:16.408900 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="dnsmasq-dns" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.408908 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="dnsmasq-dns" Feb 28 03:55:16 crc kubenswrapper[4624]: E0228 03:55:16.408924 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" containerName="barbican-db-sync" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.408932 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" containerName="barbican-db-sync" Feb 28 03:55:16 crc kubenswrapper[4624]: E0228 03:55:16.408973 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eae8a2-7f1a-447e-afbc-30bc3760f6df" containerName="keystone-bootstrap" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 
03:55:16.408981 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eae8a2-7f1a-447e-afbc-30bc3760f6df" containerName="keystone-bootstrap" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.409602 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" containerName="barbican-db-sync" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.409662 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eae8a2-7f1a-447e-afbc-30bc3760f6df" containerName="keystone-bootstrap" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.409683 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7cc5ba-b521-4349-8306-35d633072cef" containerName="dnsmasq-dns" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.410698 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.417050 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.417293 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.417569 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.418193 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.419681 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.419907 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bkrfd" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.515918 
4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-bs8v6"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523381 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-fernet-keys\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523486 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-credential-keys\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523614 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-public-tls-certs\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523735 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqkz\" (UniqueName: \"kubernetes.io/projected/3eeb3ef4-037f-4755-a2d3-46df6804b116-kube-api-access-cmqkz\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523782 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-combined-ca-bundle\") pod 
\"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523824 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-internal-tls-certs\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.523919 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-scripts\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.524042 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-config-data\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.614590 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f48865754-rdngs"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.634743 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-fernet-keys\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.638823 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-credential-keys\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.639041 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-public-tls-certs\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.639217 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqkz\" (UniqueName: \"kubernetes.io/projected/3eeb3ef4-037f-4755-a2d3-46df6804b116-kube-api-access-cmqkz\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.639303 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-combined-ca-bundle\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.639387 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-internal-tls-certs\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.639516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-scripts\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.649714 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-config-data\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.676857 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-credential-keys\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.676974 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-public-tls-certs\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.677485 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-fernet-keys\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.682332 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-75c696b849-456ll"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.684023 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.694804 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-internal-tls-certs\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.695355 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-config-data\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.705899 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-combined-ca-bundle\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.712886 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqkz\" (UniqueName: \"kubernetes.io/projected/3eeb3ef4-037f-4755-a2d3-46df6804b116-kube-api-access-cmqkz\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.713119 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.713156 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.713327 4624 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-29fpl" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.722243 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75c696b849-456ll"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.725634 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eeb3ef4-037f-4755-a2d3-46df6804b116-scripts\") pod \"keystone-6f48865754-rdngs\" (UID: \"3eeb3ef4-037f-4755-a2d3-46df6804b116\") " pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.752004 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.752661 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data-custom\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.752708 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fzm\" (UniqueName: \"kubernetes.io/projected/c42c908a-802d-416b-a7de-066df6e008bd-kube-api-access-q4fzm\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.752770 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-combined-ca-bundle\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.752805 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42c908a-802d-416b-a7de-066df6e008bd-logs\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.765364 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-f7cb456bd-7zslx"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.766915 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.779492 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.827599 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f7cb456bd-7zslx"] Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.835953 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856432 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxsrh\" (UniqueName: \"kubernetes.io/projected/32836bdc-f650-4a3a-b1f9-21de1a2992e3-kube-api-access-lxsrh\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856507 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fzm\" (UniqueName: \"kubernetes.io/projected/c42c908a-802d-416b-a7de-066df6e008bd-kube-api-access-q4fzm\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856544 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-combined-ca-bundle\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856586 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32836bdc-f650-4a3a-b1f9-21de1a2992e3-logs\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856616 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-combined-ca-bundle\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856648 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data-custom\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856665 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856685 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42c908a-802d-416b-a7de-066df6e008bd-logs\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856865 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.856912 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data-custom\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.857441 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42c908a-802d-416b-a7de-066df6e008bd-logs\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.868327 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-combined-ca-bundle\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.878469 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data-custom\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.878978 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.921347 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-t94lj"] Feb 28 03:55:16 crc kubenswrapper[4624]: 
I0228 03:55:16.923102 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.959854 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fzm\" (UniqueName: \"kubernetes.io/projected/c42c908a-802d-416b-a7de-066df6e008bd-kube-api-access-q4fzm\") pod \"barbican-worker-75c696b849-456ll\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960583 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-combined-ca-bundle\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960645 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960692 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32836bdc-f650-4a3a-b1f9-21de1a2992e3-logs\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960723 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960764 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data-custom\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960786 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960810 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-svc\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960854 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960891 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wckgm\" (UniqueName: \"kubernetes.io/projected/030a5f79-331e-4d94-98e9-67ebca169648-kube-api-access-wckgm\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960915 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-config\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.960949 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxsrh\" (UniqueName: \"kubernetes.io/projected/32836bdc-f650-4a3a-b1f9-21de1a2992e3-kube-api-access-lxsrh\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:16 crc kubenswrapper[4624]: I0228 03:55:16.978511 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32836bdc-f650-4a3a-b1f9-21de1a2992e3-logs\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.002328 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.004548 4624 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.005939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data-custom\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.027174 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-combined-ca-bundle\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.048316 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-t94lj"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.064793 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-svc\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.067362 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.086580 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wckgm\" (UniqueName: \"kubernetes.io/projected/030a5f79-331e-4d94-98e9-67ebca169648-kube-api-access-wckgm\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.092160 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-config\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.092613 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.092805 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.093354 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-config\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.068527 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.066208 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-svc\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.094967 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.096848 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.104307 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxsrh\" (UniqueName: \"kubernetes.io/projected/32836bdc-f650-4a3a-b1f9-21de1a2992e3-kube-api-access-lxsrh\") pod \"barbican-keystone-listener-f7cb456bd-7zslx\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.154249 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckgm\" (UniqueName: 
\"kubernetes.io/projected/030a5f79-331e-4d94-98e9-67ebca169648-kube-api-access-wckgm\") pod \"dnsmasq-dns-688c87cc99-t94lj\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.183256 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c4588546c-gkrmm"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.191913 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.232285 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6bc94fcbd6-4dndd"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.234107 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.243180 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c4588546c-gkrmm"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.255791 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bc94fcbd6-4dndd"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.287434 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96545fdc6-xmzr4" event={"ID":"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc","Type":"ContainerStarted","Data":"24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1"} Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297363 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpxp\" (UniqueName: \"kubernetes.io/projected/3189b6cc-a911-48f2-aff9-f41b3313d38a-kube-api-access-qtpxp\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " 
pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297426 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3189b6cc-a911-48f2-aff9-f41b3313d38a-logs\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297454 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb7219-e74f-4adf-bf31-31794a503f07-logs\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297483 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-config-data-custom\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297522 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-config-data\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297548 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-combined-ca-bundle\") pod 
\"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297566 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-config-data\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297591 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-combined-ca-bundle\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297608 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxrh\" (UniqueName: \"kubernetes.io/projected/5fbb7219-e74f-4adf-bf31-31794a503f07-kube-api-access-smxrh\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.297677 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-config-data-custom\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.309341 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"0e91936f-89cc-4665-b489-773e9c2682f2","Type":"ContainerStarted","Data":"7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf"} Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.363613 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87","Type":"ContainerStarted","Data":"c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a"} Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.373195 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.37316845 podStartE2EDuration="14.37316845s" podCreationTimestamp="2026-02-28 03:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:17.354659987 +0000 UTC m=+1172.018699286" watchObservedRunningTime="2026-02-28 03:55:17.37316845 +0000 UTC m=+1172.037207759" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.387467 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401532 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-config-data\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401603 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-combined-ca-bundle\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401625 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-config-data\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401653 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-combined-ca-bundle\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401671 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxrh\" (UniqueName: \"kubernetes.io/projected/5fbb7219-e74f-4adf-bf31-31794a503f07-kube-api-access-smxrh\") pod 
\"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401738 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-config-data-custom\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtpxp\" (UniqueName: \"kubernetes.io/projected/3189b6cc-a911-48f2-aff9-f41b3313d38a-kube-api-access-qtpxp\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401823 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3189b6cc-a911-48f2-aff9-f41b3313d38a-logs\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401841 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb7219-e74f-4adf-bf31-31794a503f07-logs\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.401870 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-config-data-custom\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.410774 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fbb7219-e74f-4adf-bf31-31794a503f07-logs\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.410774 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3189b6cc-a911-48f2-aff9-f41b3313d38a-logs\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.419761 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-config-data-custom\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.423620 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-config-data-custom\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.424165 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-combined-ca-bundle\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.424704 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbb7219-e74f-4adf-bf31-31794a503f07-config-data\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.424705 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-config-data\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.433068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3189b6cc-a911-48f2-aff9-f41b3313d38a-combined-ca-bundle\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.467572 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpxp\" (UniqueName: \"kubernetes.io/projected/3189b6cc-a911-48f2-aff9-f41b3313d38a-kube-api-access-qtpxp\") pod \"barbican-worker-6c4588546c-gkrmm\" (UID: \"3189b6cc-a911-48f2-aff9-f41b3313d38a\") " pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.469637 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.486280 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxrh\" (UniqueName: \"kubernetes.io/projected/5fbb7219-e74f-4adf-bf31-31794a503f07-kube-api-access-smxrh\") pod \"barbican-keystone-listener-6bc94fcbd6-4dndd\" (UID: \"5fbb7219-e74f-4adf-bf31-31794a503f07\") " pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.504618 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b6cbb6d88-bsx2h"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.514820 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.521340 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b6cbb6d88-bsx2h"] Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.533069 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.559499 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c4588546c-gkrmm" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.587072 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.711363 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.711421 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data-custom\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.711445 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-combined-ca-bundle\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.711529 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae632f24-f74a-413a-9835-599c21020eb5-logs\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.711615 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwjz\" (UniqueName: \"kubernetes.io/projected/ae632f24-f74a-413a-9835-599c21020eb5-kube-api-access-nqwjz\") pod 
\"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.836107 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data-custom\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.836166 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-combined-ca-bundle\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.836373 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae632f24-f74a-413a-9835-599c21020eb5-logs\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.836607 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwjz\" (UniqueName: \"kubernetes.io/projected/ae632f24-f74a-413a-9835-599c21020eb5-kube-api-access-nqwjz\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.836687 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: 
\"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.841634 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae632f24-f74a-413a-9835-599c21020eb5-logs\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.845893 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:17 crc kubenswrapper[4624]: I0228 03:55:17.942200 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-combined-ca-bundle\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:17.999325 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwjz\" (UniqueName: \"kubernetes.io/projected/ae632f24-f74a-413a-9835-599c21020eb5-kube-api-access-nqwjz\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.022168 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data-custom\") pod \"barbican-api-5b6cbb6d88-bsx2h\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " pod="openstack/barbican-api-5b6cbb6d88-bsx2h" 
Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.188997 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7cc5ba-b521-4349-8306-35d633072cef" path="/var/lib/kubelet/pods/df7cc5ba-b521-4349-8306-35d633072cef/volumes" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.223363 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.243852 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75c696b849-456ll"] Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.384854 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f48865754-rdngs"] Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.487822 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2113d6b-377f-426a-9886-d0cd608558b1","Type":"ContainerStarted","Data":"9170f7af524ba32be9556b97fb7fe1997d75397cc1cab90b26745109e85b852f"} Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.515682 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8tlzl" event={"ID":"0d169c37-cd26-4e66-8f96-d0a53a96d616","Type":"ContainerStarted","Data":"32dbac6a5ac9e1eb0c01d8030240c36e5ebd1d139f4bc5b4feb71e4a28f1bc46"} Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.566145 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.566123371 podStartE2EDuration="15.566123371s" podCreationTimestamp="2026-02-28 03:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:18.554073535 +0000 UTC m=+1173.218112844" watchObservedRunningTime="2026-02-28 03:55:18.566123371 +0000 UTC m=+1173.230162680" Feb 28 03:55:18 crc 
kubenswrapper[4624]: I0228 03:55:18.618918 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8tlzl" podStartSLOduration=6.164861513 podStartE2EDuration="1m4.618886184s" podCreationTimestamp="2026-02-28 03:54:14 +0000 UTC" firstStartedPulling="2026-02-28 03:54:16.593560344 +0000 UTC m=+1111.257599653" lastFinishedPulling="2026-02-28 03:55:15.047585025 +0000 UTC m=+1169.711624324" observedRunningTime="2026-02-28 03:55:18.597635147 +0000 UTC m=+1173.261674456" watchObservedRunningTime="2026-02-28 03:55:18.618886184 +0000 UTC m=+1173.282925503" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.621315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96545fdc6-xmzr4" event={"ID":"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc","Type":"ContainerStarted","Data":"7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0"} Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.622110 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.622178 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.637509 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75c696b849-456ll" event={"ID":"c42c908a-802d-416b-a7de-066df6e008bd","Type":"ContainerStarted","Data":"9c93ce3ca8046cf05b88d534c322db9695835608ff666e0e07af9539a8919bc9"} Feb 28 03:55:18 crc kubenswrapper[4624]: I0228 03:55:18.683512 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-96545fdc6-xmzr4" podStartSLOduration=13.683487008 podStartE2EDuration="13.683487008s" podCreationTimestamp="2026-02-28 03:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 03:55:18.665031047 +0000 UTC m=+1173.329070356" watchObservedRunningTime="2026-02-28 03:55:18.683487008 +0000 UTC m=+1173.347526317" Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.259197 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c4588546c-gkrmm"] Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.277916 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f7cb456bd-7zslx"] Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.312269 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-t94lj"] Feb 28 03:55:19 crc kubenswrapper[4624]: W0228 03:55:19.336858 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3189b6cc_a911_48f2_aff9_f41b3313d38a.slice/crio-7f515764cf58fddfc16ba4eb7f1cf1a5971ad2350f58fff40030948caab45787 WatchSource:0}: Error finding container 7f515764cf58fddfc16ba4eb7f1cf1a5971ad2350f58fff40030948caab45787: Status 404 returned error can't find the container with id 7f515764cf58fddfc16ba4eb7f1cf1a5971ad2350f58fff40030948caab45787 Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.523912 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b6cbb6d88-bsx2h"] Feb 28 03:55:19 crc kubenswrapper[4624]: W0228 03:55:19.536905 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae632f24_f74a_413a_9835_599c21020eb5.slice/crio-7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3 WatchSource:0}: Error finding container 7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3: Status 404 returned error can't find the container with id 7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3 Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.539790 4624 
patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.539868 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.580448 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bc94fcbd6-4dndd"] Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.693454 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" event={"ID":"5fbb7219-e74f-4adf-bf31-31794a503f07","Type":"ContainerStarted","Data":"3e940d784fc0157f47123db89a44f299cfdff6950486bb7b88c5915492030673"} Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.718553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c4588546c-gkrmm" event={"ID":"3189b6cc-a911-48f2-aff9-f41b3313d38a","Type":"ContainerStarted","Data":"7f515764cf58fddfc16ba4eb7f1cf1a5971ad2350f58fff40030948caab45787"} Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.730298 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" event={"ID":"ae632f24-f74a-413a-9835-599c21020eb5","Type":"ContainerStarted","Data":"7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3"} Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.744437 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f48865754-rdngs" 
event={"ID":"3eeb3ef4-037f-4755-a2d3-46df6804b116","Type":"ContainerStarted","Data":"33c3144eabee8e578a8fe549a7ccf4342cb70f2c271076abe716dc69dfdd31e4"} Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.768181 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" event={"ID":"32836bdc-f650-4a3a-b1f9-21de1a2992e3","Type":"ContainerStarted","Data":"b7ba5564aa1c99b564d1863d34fdec3dc70b4d1b2faad3c8cd4c226050a2a62f"} Feb 28 03:55:19 crc kubenswrapper[4624]: I0228 03:55:19.785419 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" event={"ID":"030a5f79-331e-4d94-98e9-67ebca169648","Type":"ContainerStarted","Data":"46d628c29cf79d07f54bf2a8ddf75b40dc6fda0600f62934492f458ccc08a464"} Feb 28 03:55:20 crc kubenswrapper[4624]: I0228 03:55:20.825313 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" event={"ID":"ae632f24-f74a-413a-9835-599c21020eb5","Type":"ContainerStarted","Data":"221a2438ea7f8034dd34b857c660b9c4e5cd5735c4d43f7a3120fcba1166e62b"} Feb 28 03:55:20 crc kubenswrapper[4624]: I0228 03:55:20.844439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f48865754-rdngs" event={"ID":"3eeb3ef4-037f-4755-a2d3-46df6804b116","Type":"ContainerStarted","Data":"cd0c71a63639ec3590b719951a85b3f399d98e5c92513a3045e4215f55113eb2"} Feb 28 03:55:20 crc kubenswrapper[4624]: I0228 03:55:20.844498 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:20 crc kubenswrapper[4624]: I0228 03:55:20.880961 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f48865754-rdngs" podStartSLOduration=4.880940346 podStartE2EDuration="4.880940346s" podCreationTimestamp="2026-02-28 03:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 03:55:20.879047544 +0000 UTC m=+1175.543086853" watchObservedRunningTime="2026-02-28 03:55:20.880940346 +0000 UTC m=+1175.544979655" Feb 28 03:55:20 crc kubenswrapper[4624]: I0228 03:55:20.882405 4624 generic.go:334] "Generic (PLEG): container finished" podID="030a5f79-331e-4d94-98e9-67ebca169648" containerID="5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f" exitCode=0 Feb 28 03:55:20 crc kubenswrapper[4624]: I0228 03:55:20.882560 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" event={"ID":"030a5f79-331e-4d94-98e9-67ebca169648","Type":"ContainerDied","Data":"5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f"} Feb 28 03:55:21 crc kubenswrapper[4624]: I0228 03:55:21.914963 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" event={"ID":"ae632f24-f74a-413a-9835-599c21020eb5","Type":"ContainerStarted","Data":"2088cea6906d1eaac4a8ed04afc2296f10c98a6e248003c7a5cb2ef2ab1af75b"} Feb 28 03:55:21 crc kubenswrapper[4624]: I0228 03:55:21.916177 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:21 crc kubenswrapper[4624]: I0228 03:55:21.969311 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podStartSLOduration=4.969292637 podStartE2EDuration="4.969292637s" podCreationTimestamp="2026-02-28 03:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:21.964282391 +0000 UTC m=+1176.628321710" watchObservedRunningTime="2026-02-28 03:55:21.969292637 +0000 UTC m=+1176.633331946" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.103873 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-697469cdb8-v44r2"] Feb 28 03:55:22 crc 
kubenswrapper[4624]: I0228 03:55:22.105714 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.111296 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.111543 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.114195 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-697469cdb8-v44r2"] Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265450 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqrs\" (UniqueName: \"kubernetes.io/projected/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-kube-api-access-5tqrs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265522 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-config-data-custom\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265571 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-public-tls-certs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265611 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-config-data\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265635 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-logs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265667 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-combined-ca-bundle\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.265689 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-internal-tls-certs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.367858 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqrs\" (UniqueName: \"kubernetes.io/projected/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-kube-api-access-5tqrs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: 
I0228 03:55:22.367929 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-config-data-custom\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.367997 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-public-tls-certs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.368048 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-config-data\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.368131 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-logs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.368178 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-internal-tls-certs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.368205 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-combined-ca-bundle\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.368926 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-logs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.376762 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-public-tls-certs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.377256 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-config-data-custom\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.378845 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-combined-ca-bundle\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.380871 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-internal-tls-certs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.381638 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-config-data\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.396382 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqrs\" (UniqueName: \"kubernetes.io/projected/413c221c-acb0-4f2d-9621-b5bd0cdc14a5-kube-api-access-5tqrs\") pod \"barbican-api-697469cdb8-v44r2\" (UID: \"413c221c-acb0-4f2d-9621-b5bd0cdc14a5\") " pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.452518 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:22 crc kubenswrapper[4624]: I0228 03:55:22.930069 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.039444 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.039872 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.132351 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.153138 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.324357 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.333727 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.372143 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 
03:55:24.372203 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.527438 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.646028 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.892447 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-697469cdb8-v44r2"] Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.985584 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c4588546c-gkrmm" event={"ID":"3189b6cc-a911-48f2-aff9-f41b3313d38a","Type":"ContainerStarted","Data":"5bbe46154a1537559ddd6856e155fbf4f5b426e0ed4da296050f21c4641f2f9b"} Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.986603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75c696b849-456ll" event={"ID":"c42c908a-802d-416b-a7de-066df6e008bd","Type":"ContainerStarted","Data":"d2ca66697571e5f8206e57c11167ef607f623a8688e20234c1cbcba16ceb5a32"} Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.987406 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-697469cdb8-v44r2" event={"ID":"413c221c-acb0-4f2d-9621-b5bd0cdc14a5","Type":"ContainerStarted","Data":"ab227c96a763d91dcfbfd23aee3bee80dd6450991fa54b43dc3777fe6c8011ec"} Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.994302 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" event={"ID":"030a5f79-331e-4d94-98e9-67ebca169648","Type":"ContainerStarted","Data":"af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75"} Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 
03:55:24.994332 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.995067 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.995104 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.995114 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:24 crc kubenswrapper[4624]: I0228 03:55:24.995125 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 03:55:25 crc kubenswrapper[4624]: I0228 03:55:25.040186 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" podStartSLOduration=9.040076207 podStartE2EDuration="9.040076207s" podCreationTimestamp="2026-02-28 03:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:25.036797398 +0000 UTC m=+1179.700836707" watchObservedRunningTime="2026-02-28 03:55:25.040076207 +0000 UTC m=+1179.704115516" Feb 28 03:55:25 crc kubenswrapper[4624]: I0228 03:55:25.647139 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:55:25 crc kubenswrapper[4624]: I0228 03:55:25.957263 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7954bdb9b9-dwqsd"] Feb 28 03:55:25 crc kubenswrapper[4624]: I0228 03:55:25.957595 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7954bdb9b9-dwqsd" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" 
containerName="neutron-api" containerID="cri-o://3bf863094fac0cc574e88584ad7aef70be00a5b7d815b10d280024c0b502eb20" gracePeriod=30 Feb 28 03:55:25 crc kubenswrapper[4624]: I0228 03:55:25.958504 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7954bdb9b9-dwqsd" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-httpd" containerID="cri-o://fed657b96509faabeb0b6e18490ac2b7a15cdd03d6bb24cec2741fa935cd65d3" gracePeriod=30 Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.024845 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7954bdb9b9-dwqsd" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": EOF" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.035331 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" event={"ID":"5fbb7219-e74f-4adf-bf31-31794a503f07","Type":"ContainerStarted","Data":"b80f14a98e31ed8cf4c6bcac4de6a843c4f0c0944d098e8834e8c4613629b96c"} Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.052128 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c4588546c-gkrmm" event={"ID":"3189b6cc-a911-48f2-aff9-f41b3313d38a","Type":"ContainerStarted","Data":"6aadaa6babda5ab1370649fad498e9f9fbee4df85fe22a21b2793376a017abf6"} Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.057391 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b4965c79c-gh5mv"] Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.059635 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.072383 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75c696b849-456ll" event={"ID":"c42c908a-802d-416b-a7de-066df6e008bd","Type":"ContainerStarted","Data":"8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808"} Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.095295 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-public-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.095659 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-combined-ca-bundle\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.095778 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-httpd-config\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.096139 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2769\" (UniqueName: \"kubernetes.io/projected/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-kube-api-access-p2769\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 
03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.098908 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-config\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.099027 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-ovndb-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.099336 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-internal-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.125770 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-697469cdb8-v44r2" event={"ID":"413c221c-acb0-4f2d-9621-b5bd0cdc14a5","Type":"ContainerStarted","Data":"b18c363df3011f69e86d266552543391d4be93567fb189eb822106e3e2bbe4c0"} Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.125813 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b4965c79c-gh5mv"] Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.133327 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" event={"ID":"32836bdc-f650-4a3a-b1f9-21de1a2992e3","Type":"ContainerStarted","Data":"67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204"} Feb 28 03:55:26 crc 
kubenswrapper[4624]: I0228 03:55:26.133384 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" event={"ID":"32836bdc-f650-4a3a-b1f9-21de1a2992e3","Type":"ContainerStarted","Data":"8c63bf156015f7c145e8e225f906f2ed92cbe5d65c471bb91f916daccf77b6eb"} Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.140517 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c4588546c-gkrmm" podStartSLOduration=4.438625766 podStartE2EDuration="9.140493156s" podCreationTimestamp="2026-02-28 03:55:17 +0000 UTC" firstStartedPulling="2026-02-28 03:55:19.423242124 +0000 UTC m=+1174.087281433" lastFinishedPulling="2026-02-28 03:55:24.125109514 +0000 UTC m=+1178.789148823" observedRunningTime="2026-02-28 03:55:26.089267595 +0000 UTC m=+1180.753306904" watchObservedRunningTime="2026-02-28 03:55:26.140493156 +0000 UTC m=+1180.804532465" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.192965 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-75c696b849-456ll"] Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201290 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-public-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201388 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-combined-ca-bundle\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201427 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-httpd-config\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201467 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2769\" (UniqueName: \"kubernetes.io/projected/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-kube-api-access-p2769\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201493 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-config\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-ovndb-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.201652 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-internal-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.229866 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-75c696b849-456ll" podStartSLOduration=4.374804322 
podStartE2EDuration="10.229836772s" podCreationTimestamp="2026-02-28 03:55:16 +0000 UTC" firstStartedPulling="2026-02-28 03:55:18.270881885 +0000 UTC m=+1172.934921194" lastFinishedPulling="2026-02-28 03:55:24.125914335 +0000 UTC m=+1178.789953644" observedRunningTime="2026-02-28 03:55:26.181832338 +0000 UTC m=+1180.845871647" watchObservedRunningTime="2026-02-28 03:55:26.229836772 +0000 UTC m=+1180.893876171" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.250162 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2769\" (UniqueName: \"kubernetes.io/projected/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-kube-api-access-p2769\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.264310 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-internal-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.264822 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-public-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.267298 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-ovndb-tls-certs\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.267731 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-combined-ca-bundle\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.268998 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-httpd-config\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.273762 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6aa9707-50ce-40f7-a741-9dcfea4b1f8e-config\") pod \"neutron-b4965c79c-gh5mv\" (UID: \"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e\") " pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.441636 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:26 crc kubenswrapper[4624]: I0228 03:55:26.550152 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" podStartSLOduration=5.774642291 podStartE2EDuration="10.550123149s" podCreationTimestamp="2026-02-28 03:55:16 +0000 UTC" firstStartedPulling="2026-02-28 03:55:19.350349975 +0000 UTC m=+1174.014389274" lastFinishedPulling="2026-02-28 03:55:24.125830823 +0000 UTC m=+1178.789870132" observedRunningTime="2026-02-28 03:55:26.413846458 +0000 UTC m=+1181.077885767" watchObservedRunningTime="2026-02-28 03:55:26.550123149 +0000 UTC m=+1181.214162458" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.156242 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" event={"ID":"5fbb7219-e74f-4adf-bf31-31794a503f07","Type":"ContainerStarted","Data":"10acae980299e9f46ed7a506bb971d4ecaa772c13411cd66b769cda3f7e6502b"} Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.171469 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.171497 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.172649 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.172662 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.172744 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-697469cdb8-v44r2" event={"ID":"413c221c-acb0-4f2d-9621-b5bd0cdc14a5","Type":"ContainerStarted","Data":"f2ce01caf4d0ebb327c13e7ed0d4275016f7235e9c1ac99e185c5977862044f9"} Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.173363 4624 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.173381 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.197156 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6bc94fcbd6-4dndd" podStartSLOduration=5.69084424 podStartE2EDuration="10.197135197s" podCreationTimestamp="2026-02-28 03:55:17 +0000 UTC" firstStartedPulling="2026-02-28 03:55:19.633439643 +0000 UTC m=+1174.297478952" lastFinishedPulling="2026-02-28 03:55:24.1397306 +0000 UTC m=+1178.803769909" observedRunningTime="2026-02-28 03:55:27.184063151 +0000 UTC m=+1181.848102460" watchObservedRunningTime="2026-02-28 03:55:27.197135197 +0000 UTC m=+1181.861174506" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.229305 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-f7cb456bd-7zslx"] Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.264834 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-697469cdb8-v44r2" podStartSLOduration=5.264807404 podStartE2EDuration="5.264807404s" podCreationTimestamp="2026-02-28 03:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:27.232669632 +0000 UTC m=+1181.896708941" watchObservedRunningTime="2026-02-28 03:55:27.264807404 +0000 UTC m=+1181.928846723" Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.398248 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b4965c79c-gh5mv"] Feb 28 03:55:27 crc kubenswrapper[4624]: I0228 03:55:27.905274 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7954bdb9b9-dwqsd" 
podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": dial tcp 10.217.0.157:9696: connect: connection refused" Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.192365 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b4965c79c-gh5mv" event={"ID":"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e","Type":"ContainerStarted","Data":"99e04846e2b8f1fa5e455a03bb45738a53f84563dd7cd360e612b19cca436b5d"} Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.192443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b4965c79c-gh5mv" event={"ID":"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e","Type":"ContainerStarted","Data":"8bdca05b0a6137f3775471e2dfc26ec67bc6b9804aba3562121a60525b970fa9"} Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.198307 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerID="fed657b96509faabeb0b6e18490ac2b7a15cdd03d6bb24cec2741fa935cd65d3" exitCode=0 Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.199439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954bdb9b9-dwqsd" event={"ID":"e5862b3f-4b92-4bcc-8d77-4585e53475a8","Type":"ContainerDied","Data":"fed657b96509faabeb0b6e18490ac2b7a15cdd03d6bb24cec2741fa935cd65d3"} Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.199620 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener-log" containerID="cri-o://8c63bf156015f7c145e8e225f906f2ed92cbe5d65c471bb91f916daccf77b6eb" gracePeriod=30 Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.200493 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-75c696b849-456ll" 
podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker-log" containerID="cri-o://d2ca66697571e5f8206e57c11167ef607f623a8688e20234c1cbcba16ceb5a32" gracePeriod=30 Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.200836 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener" containerID="cri-o://67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204" gracePeriod=30 Feb 28 03:55:28 crc kubenswrapper[4624]: I0228 03:55:28.200913 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-75c696b849-456ll" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker" containerID="cri-o://8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808" gracePeriod=30 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.284962 4624 generic.go:334] "Generic (PLEG): container finished" podID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerID="5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a" exitCode=137 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.285672 4624 generic.go:334] "Generic (PLEG): container finished" podID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerID="87f6910ac28a0d1e4d386ad296e664e46c67ec2e729f80ec950b038a1b12e733" exitCode=137 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.285868 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5cbd76fc-29hwp" event={"ID":"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b","Type":"ContainerDied","Data":"5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.285912 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5cbd76fc-29hwp" 
event={"ID":"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b","Type":"ContainerDied","Data":"87f6910ac28a0d1e4d386ad296e664e46c67ec2e729f80ec950b038a1b12e733"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.295901 4624 generic.go:334] "Generic (PLEG): container finished" podID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerID="28426b33d1604a4113f611ec024970e510de3204f6c6077604b6d2a92d46ff21" exitCode=137 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.295947 4624 generic.go:334] "Generic (PLEG): container finished" podID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerID="cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6" exitCode=137 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.296017 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6697794ff-9xhd2" event={"ID":"d192312c-1396-4c6c-a687-b4ddfe356ded","Type":"ContainerDied","Data":"28426b33d1604a4113f611ec024970e510de3204f6c6077604b6d2a92d46ff21"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.296056 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6697794ff-9xhd2" event={"ID":"d192312c-1396-4c6c-a687-b4ddfe356ded","Type":"ContainerDied","Data":"cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.327375 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.354530 4624 generic.go:334] "Generic (PLEG): container finished" podID="c42c908a-802d-416b-a7de-066df6e008bd" containerID="d2ca66697571e5f8206e57c11167ef607f623a8688e20234c1cbcba16ceb5a32" exitCode=143 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.354640 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75c696b849-456ll" event={"ID":"c42c908a-802d-416b-a7de-066df6e008bd","Type":"ContainerDied","Data":"d2ca66697571e5f8206e57c11167ef607f623a8688e20234c1cbcba16ceb5a32"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.385574 4624 generic.go:334] "Generic (PLEG): container finished" podID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerID="8c63bf156015f7c145e8e225f906f2ed92cbe5d65c471bb91f916daccf77b6eb" exitCode=143 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.386051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" event={"ID":"32836bdc-f650-4a3a-b1f9-21de1a2992e3","Type":"ContainerDied","Data":"8c63bf156015f7c145e8e225f906f2ed92cbe5d65c471bb91f916daccf77b6eb"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.476667 4624 generic.go:334] "Generic (PLEG): container finished" podID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerID="2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44" exitCode=137 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.476805 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb4b6465-msstz" event={"ID":"69bc742a-a80d-43f4-90cc-993a14f7dbd5","Type":"ContainerDied","Data":"2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.522958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b4965c79c-gh5mv" 
event={"ID":"c6aa9707-50ce-40f7-a741-9dcfea4b1f8e","Type":"ContainerStarted","Data":"1398ce5861a9bb7c86c368a435f958450403ac892f5a41a37af033bf0ce76b88"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.525643 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.530474 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-config\") pod \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.530580 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-ovndb-tls-certs\") pod \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.530693 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-combined-ca-bundle\") pod \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.530750 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-internal-tls-certs\") pod \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.531109 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-public-tls-certs\") pod 
\"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.531192 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxtbp\" (UniqueName: \"kubernetes.io/projected/e5862b3f-4b92-4bcc-8d77-4585e53475a8-kube-api-access-zxtbp\") pod \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.531266 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-httpd-config\") pod \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\" (UID: \"e5862b3f-4b92-4bcc-8d77-4585e53475a8\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.554397 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.567913 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5862b3f-4b92-4bcc-8d77-4585e53475a8-kube-api-access-zxtbp" (OuterVolumeSpecName: "kube-api-access-zxtbp") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "kube-api-access-zxtbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.579847 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerID="3bf863094fac0cc574e88584ad7aef70be00a5b7d815b10d280024c0b502eb20" exitCode=0 Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.582754 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7954bdb9b9-dwqsd" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.583518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954bdb9b9-dwqsd" event={"ID":"e5862b3f-4b92-4bcc-8d77-4585e53475a8","Type":"ContainerDied","Data":"3bf863094fac0cc574e88584ad7aef70be00a5b7d815b10d280024c0b502eb20"} Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.583614 4624 scope.go:117] "RemoveContainer" containerID="fed657b96509faabeb0b6e18490ac2b7a15cdd03d6bb24cec2741fa935cd65d3" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.594103 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b4965c79c-gh5mv" podStartSLOduration=4.59405581 podStartE2EDuration="4.59405581s" podCreationTimestamp="2026-02-28 03:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:29.591038587 +0000 UTC m=+1184.255077896" watchObservedRunningTime="2026-02-28 03:55:29.59405581 +0000 UTC m=+1184.258095119" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.638409 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxtbp\" (UniqueName: \"kubernetes.io/projected/e5862b3f-4b92-4bcc-8d77-4585e53475a8-kube-api-access-zxtbp\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.638441 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: E0228 03:55:29.739961 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69bc742a_a80d_43f4_90cc_993a14f7dbd5.slice/crio-2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd192312c_1396_4c6c_a687_b4ddfe356ded.slice/crio-conmon-cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4468dd90_bd07_4cdf_8fc8_de0dcfff1c4b.slice/crio-conmon-87f6910ac28a0d1e4d386ad296e664e46c67ec2e729f80ec950b038a1b12e733.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4468dd90_bd07_4cdf_8fc8_de0dcfff1c4b.slice/crio-conmon-5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4468dd90_bd07_4cdf_8fc8_de0dcfff1c4b.slice/crio-5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd192312c_1396_4c6c_a687_b4ddfe356ded.slice/crio-cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69bc742a_a80d_43f4_90cc_993a14f7dbd5.slice/crio-conmon-2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44.scope\": RecentStats: unable to find data in memory cache]" 
Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.768389 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-config" (OuterVolumeSpecName: "config") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.799192 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.817702 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.820931 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.844672 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.844967 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.845043 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.859723 4624 scope.go:117] "RemoveContainer" containerID="3bf863094fac0cc574e88584ad7aef70be00a5b7d815b10d280024c0b502eb20" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.880714 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.941246 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e5862b3f-4b92-4bcc-8d77-4585e53475a8" (UID: "e5862b3f-4b92-4bcc-8d77-4585e53475a8"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.946787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-config-data\") pod \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.947049 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-logs\") pod \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.947229 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-horizon-secret-key\") pod \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.948218 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-scripts\") pod \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.948293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7sw\" (UniqueName: \"kubernetes.io/projected/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-kube-api-access-zt7sw\") pod \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\" (UID: \"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b\") " Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.949386 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.949423 4624 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5862b3f-4b92-4bcc-8d77-4585e53475a8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.963319 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-kube-api-access-zt7sw" (OuterVolumeSpecName: "kube-api-access-zt7sw") pod "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" (UID: "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b"). InnerVolumeSpecName "kube-api-access-zt7sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.968835 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-logs" (OuterVolumeSpecName: "logs") pod "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" (UID: "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:29 crc kubenswrapper[4624]: I0228 03:55:29.981889 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" (UID: "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.041225 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-config-data" (OuterVolumeSpecName: "config-data") pod "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" (UID: "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.055360 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt7sw\" (UniqueName: \"kubernetes.io/projected/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-kube-api-access-zt7sw\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.055407 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.055417 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.055427 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.059799 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-scripts" (OuterVolumeSpecName: "scripts") pod "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" (UID: "4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.158585 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.174653 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.260194 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-scripts\") pod \"d192312c-1396-4c6c-a687-b4ddfe356ded\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.260290 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqx5s\" (UniqueName: \"kubernetes.io/projected/d192312c-1396-4c6c-a687-b4ddfe356ded-kube-api-access-tqx5s\") pod \"d192312c-1396-4c6c-a687-b4ddfe356ded\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.260414 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-config-data\") pod \"d192312c-1396-4c6c-a687-b4ddfe356ded\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.260498 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d192312c-1396-4c6c-a687-b4ddfe356ded-horizon-secret-key\") pod \"d192312c-1396-4c6c-a687-b4ddfe356ded\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.260609 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192312c-1396-4c6c-a687-b4ddfe356ded-logs\") pod \"d192312c-1396-4c6c-a687-b4ddfe356ded\" (UID: \"d192312c-1396-4c6c-a687-b4ddfe356ded\") " Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.261415 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d192312c-1396-4c6c-a687-b4ddfe356ded-logs" (OuterVolumeSpecName: "logs") pod "d192312c-1396-4c6c-a687-b4ddfe356ded" (UID: "d192312c-1396-4c6c-a687-b4ddfe356ded"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.291344 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192312c-1396-4c6c-a687-b4ddfe356ded-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d192312c-1396-4c6c-a687-b4ddfe356ded" (UID: "d192312c-1396-4c6c-a687-b4ddfe356ded"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.301332 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d192312c-1396-4c6c-a687-b4ddfe356ded-kube-api-access-tqx5s" (OuterVolumeSpecName: "kube-api-access-tqx5s") pod "d192312c-1396-4c6c-a687-b4ddfe356ded" (UID: "d192312c-1396-4c6c-a687-b4ddfe356ded"). InnerVolumeSpecName "kube-api-access-tqx5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.324691 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7954bdb9b9-dwqsd"] Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.347106 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7954bdb9b9-dwqsd"] Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.366258 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192312c-1396-4c6c-a687-b4ddfe356ded-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.366294 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqx5s\" (UniqueName: \"kubernetes.io/projected/d192312c-1396-4c6c-a687-b4ddfe356ded-kube-api-access-tqx5s\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.366307 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d192312c-1396-4c6c-a687-b4ddfe356ded-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.390587 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-config-data" (OuterVolumeSpecName: "config-data") pod "d192312c-1396-4c6c-a687-b4ddfe356ded" (UID: "d192312c-1396-4c6c-a687-b4ddfe356ded"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.453368 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-scripts" (OuterVolumeSpecName: "scripts") pod "d192312c-1396-4c6c-a687-b4ddfe356ded" (UID: "d192312c-1396-4c6c-a687-b4ddfe356ded"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.476184 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.476237 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d192312c-1396-4c6c-a687-b4ddfe356ded-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.622051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6697794ff-9xhd2" event={"ID":"d192312c-1396-4c6c-a687-b4ddfe356ded","Type":"ContainerDied","Data":"950ef5aff74ba54a72f723f5177952ecd245b60824ee05b8fd0b7f3e9f0a3324"} Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.622136 4624 scope.go:117] "RemoveContainer" containerID="28426b33d1604a4113f611ec024970e510de3204f6c6077604b6d2a92d46ff21" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.622245 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6697794ff-9xhd2" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.642911 4624 generic.go:334] "Generic (PLEG): container finished" podID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerID="0ff56fa1719908852e8484a7ceaa2041362f9a20c8610502428f676a606e3435" exitCode=137 Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.643015 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb4b6465-msstz" event={"ID":"69bc742a-a80d-43f4-90cc-993a14f7dbd5","Type":"ContainerDied","Data":"0ff56fa1719908852e8484a7ceaa2041362f9a20c8610502428f676a606e3435"} Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.681120 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6697794ff-9xhd2"] Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.691185 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5cbd76fc-29hwp" event={"ID":"4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b","Type":"ContainerDied","Data":"aec2d609b63609b4f139d51ecba81744fdb962692cfd2ee8f87912753535e322"} Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.691289 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5cbd76fc-29hwp" Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.715979 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6697794ff-9xhd2"] Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.758540 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5cbd76fc-29hwp"] Feb 28 03:55:30 crc kubenswrapper[4624]: I0228 03:55:30.775219 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c5cbd76fc-29hwp"] Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.010049 4624 scope.go:117] "RemoveContainer" containerID="cbc953a1abdb84e316ab71eb96c9e1a8d5914b63a5fbfd8eedc7bf59086deac6" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.162643 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.168389 4624 scope.go:117] "RemoveContainer" containerID="5bc92e779fe979e448b4b07329dbe23e36916906253a6d2a42b8ad012829670a" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.196886 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-config-data\") pod \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.196985 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/69bc742a-a80d-43f4-90cc-993a14f7dbd5-horizon-secret-key\") pod \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.197125 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-scripts\") pod \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.197179 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc742a-a80d-43f4-90cc-993a14f7dbd5-logs\") pod \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.197292 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5htrv\" (UniqueName: \"kubernetes.io/projected/69bc742a-a80d-43f4-90cc-993a14f7dbd5-kube-api-access-5htrv\") pod \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\" (UID: \"69bc742a-a80d-43f4-90cc-993a14f7dbd5\") " Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.204260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69bc742a-a80d-43f4-90cc-993a14f7dbd5-logs" (OuterVolumeSpecName: "logs") pod "69bc742a-a80d-43f4-90cc-993a14f7dbd5" (UID: "69bc742a-a80d-43f4-90cc-993a14f7dbd5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.216610 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc742a-a80d-43f4-90cc-993a14f7dbd5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "69bc742a-a80d-43f4-90cc-993a14f7dbd5" (UID: "69bc742a-a80d-43f4-90cc-993a14f7dbd5"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.228452 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bc742a-a80d-43f4-90cc-993a14f7dbd5-kube-api-access-5htrv" (OuterVolumeSpecName: "kube-api-access-5htrv") pod "69bc742a-a80d-43f4-90cc-993a14f7dbd5" (UID: "69bc742a-a80d-43f4-90cc-993a14f7dbd5"). InnerVolumeSpecName "kube-api-access-5htrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.294207 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-config-data" (OuterVolumeSpecName: "config-data") pod "69bc742a-a80d-43f4-90cc-993a14f7dbd5" (UID: "69bc742a-a80d-43f4-90cc-993a14f7dbd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.300435 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.300484 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/69bc742a-a80d-43f4-90cc-993a14f7dbd5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.300500 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc742a-a80d-43f4-90cc-993a14f7dbd5-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.300512 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5htrv\" (UniqueName: \"kubernetes.io/projected/69bc742a-a80d-43f4-90cc-993a14f7dbd5-kube-api-access-5htrv\") on node \"crc\" 
DevicePath \"\"" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.312677 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-scripts" (OuterVolumeSpecName: "scripts") pod "69bc742a-a80d-43f4-90cc-993a14f7dbd5" (UID: "69bc742a-a80d-43f4-90cc-993a14f7dbd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.404153 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69bc742a-a80d-43f4-90cc-993a14f7dbd5-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.517856 4624 scope.go:117] "RemoveContainer" containerID="87f6910ac28a0d1e4d386ad296e664e46c67ec2e729f80ec950b038a1b12e733" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.745427 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb4b6465-msstz" event={"ID":"69bc742a-a80d-43f4-90cc-993a14f7dbd5","Type":"ContainerDied","Data":"730e233069e061c1ca7dbb270746bbe2e9aecab01a326ca2e4236a7a75aace91"} Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.745456 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78cb4b6465-msstz" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.745507 4624 scope.go:117] "RemoveContainer" containerID="0ff56fa1719908852e8484a7ceaa2041362f9a20c8610502428f676a606e3435" Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.825928 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78cb4b6465-msstz"] Feb 28 03:55:31 crc kubenswrapper[4624]: I0228 03:55:31.826272 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78cb4b6465-msstz"] Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.053465 4624 scope.go:117] "RemoveContainer" containerID="2fe62e4e861effafde0f8ad42b6cbaf54cc02e8de54add21411c30fa632e6f44" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.135214 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" path="/var/lib/kubelet/pods/4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b/volumes" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.154872 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" path="/var/lib/kubelet/pods/69bc742a-a80d-43f4-90cc-993a14f7dbd5/volumes" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.155762 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" path="/var/lib/kubelet/pods/d192312c-1396-4c6c-a687-b4ddfe356ded/volumes" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.179648 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" path="/var/lib/kubelet/pods/e5862b3f-4b92-4bcc-8d77-4585e53475a8/volumes" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.268709 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" 
probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.311615 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.474262 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.578154 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-w7g2j"] Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.578978 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerName="dnsmasq-dns" containerID="cri-o://638710381bd4a8ec11be2f45faaa4542107b3f946b7568f2a28d0910c2be9e74" gracePeriod=10 Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.799184 4624 generic.go:334] "Generic (PLEG): container finished" podID="0d169c37-cd26-4e66-8f96-d0a53a96d616" containerID="32dbac6a5ac9e1eb0c01d8030240c36e5ebd1d139f4bc5b4feb71e4a28f1bc46" exitCode=0 Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.799694 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8tlzl" event={"ID":"0d169c37-cd26-4e66-8f96-d0a53a96d616","Type":"ContainerDied","Data":"32dbac6a5ac9e1eb0c01d8030240c36e5ebd1d139f4bc5b4feb71e4a28f1bc46"} Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.828001 4624 generic.go:334] "Generic (PLEG): container finished" podID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" 
containerID="638710381bd4a8ec11be2f45faaa4542107b3f946b7568f2a28d0910c2be9e74" exitCode=0 Feb 28 03:55:32 crc kubenswrapper[4624]: I0228 03:55:32.828127 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" event={"ID":"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0","Type":"ContainerDied","Data":"638710381bd4a8ec11be2f45faaa4542107b3f946b7568f2a28d0910c2be9e74"} Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.326441 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.326934 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.497011 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.593899 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-nb\") pod \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.593986 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-sb\") pod \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.594167 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-config\") pod \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.594304 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-swift-storage-0\") pod \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.594342 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-svc\") pod \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.594416 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9nt4\" 
(UniqueName: \"kubernetes.io/projected/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-kube-api-access-h9nt4\") pod \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\" (UID: \"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0\") " Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.620916 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-kube-api-access-h9nt4" (OuterVolumeSpecName: "kube-api-access-h9nt4") pod "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" (UID: "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0"). InnerVolumeSpecName "kube-api-access-h9nt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.694977 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" (UID: "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.696479 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9nt4\" (UniqueName: \"kubernetes.io/projected/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-kube-api-access-h9nt4\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.696506 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.697379 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" (UID: "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.713638 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" (UID: "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.733788 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-config" (OuterVolumeSpecName: "config") pod "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" (UID: "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.744780 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" (UID: "3924a0f6-ef65-46cb-a41c-8bcc27ad87a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.799949 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.800215 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.800230 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.800241 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.862160 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.862222 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-w7g2j" event={"ID":"3924a0f6-ef65-46cb-a41c-8bcc27ad87a0","Type":"ContainerDied","Data":"35a060c1c8e9c4b3045b90d486b2afef056ab50fb25cef2de3a2d4ec1d709d85"} Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.862281 4624 scope.go:117] "RemoveContainer" containerID="638710381bd4a8ec11be2f45faaa4542107b3f946b7568f2a28d0910c2be9e74" Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.922172 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-w7g2j"] Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.932249 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-w7g2j"] Feb 28 03:55:33 crc kubenswrapper[4624]: I0228 03:55:33.970003 4624 scope.go:117] "RemoveContainer" containerID="300ad89ebdfc43ba6353036a3e2086a83c22deec533c32917dbcd16d1ab6be93" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.113616 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" path="/var/lib/kubelet/pods/3924a0f6-ef65-46cb-a41c-8bcc27ad87a0/volumes" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.146415 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.146539 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.148029 4624 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163"} pod="openstack/horizon-5b4bc59cd8-fkd4p" containerMessage="Container horizon failed startup probe, will be restarted" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.148119 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" containerID="cri-o://0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163" gracePeriod=30 Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.322470 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.322904 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.323756 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43"} pod="openstack/horizon-6cc988c5cd-svksm" containerMessage="Container horizon failed startup probe, will be restarted" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.323822 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" containerID="cri-o://5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43" gracePeriod=30 Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.501937 4624 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.628058 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtx6c\" (UniqueName: \"kubernetes.io/projected/0d169c37-cd26-4e66-8f96-d0a53a96d616-kube-api-access-rtx6c\") pod \"0d169c37-cd26-4e66-8f96-d0a53a96d616\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.628143 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d169c37-cd26-4e66-8f96-d0a53a96d616-etc-machine-id\") pod \"0d169c37-cd26-4e66-8f96-d0a53a96d616\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.628197 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-db-sync-config-data\") pod \"0d169c37-cd26-4e66-8f96-d0a53a96d616\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.628221 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-combined-ca-bundle\") pod \"0d169c37-cd26-4e66-8f96-d0a53a96d616\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.628250 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-scripts\") pod \"0d169c37-cd26-4e66-8f96-d0a53a96d616\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.628265 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-config-data\") pod \"0d169c37-cd26-4e66-8f96-d0a53a96d616\" (UID: \"0d169c37-cd26-4e66-8f96-d0a53a96d616\") " Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.629798 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d169c37-cd26-4e66-8f96-d0a53a96d616-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d169c37-cd26-4e66-8f96-d0a53a96d616" (UID: "0d169c37-cd26-4e66-8f96-d0a53a96d616"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.649263 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-scripts" (OuterVolumeSpecName: "scripts") pod "0d169c37-cd26-4e66-8f96-d0a53a96d616" (UID: "0d169c37-cd26-4e66-8f96-d0a53a96d616"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.655323 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0d169c37-cd26-4e66-8f96-d0a53a96d616" (UID: "0d169c37-cd26-4e66-8f96-d0a53a96d616"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.675830 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d169c37-cd26-4e66-8f96-d0a53a96d616-kube-api-access-rtx6c" (OuterVolumeSpecName: "kube-api-access-rtx6c") pod "0d169c37-cd26-4e66-8f96-d0a53a96d616" (UID: "0d169c37-cd26-4e66-8f96-d0a53a96d616"). InnerVolumeSpecName "kube-api-access-rtx6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.699698 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d169c37-cd26-4e66-8f96-d0a53a96d616" (UID: "0d169c37-cd26-4e66-8f96-d0a53a96d616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.731368 4624 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.731726 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.731792 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.731862 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtx6c\" (UniqueName: \"kubernetes.io/projected/0d169c37-cd26-4e66-8f96-d0a53a96d616-kube-api-access-rtx6c\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.731986 4624 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d169c37-cd26-4e66-8f96-d0a53a96d616-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.747601 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-config-data" (OuterVolumeSpecName: "config-data") pod "0d169c37-cd26-4e66-8f96-d0a53a96d616" (UID: "0d169c37-cd26-4e66-8f96-d0a53a96d616"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.833575 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d169c37-cd26-4e66-8f96-d0a53a96d616-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.899823 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8tlzl" event={"ID":"0d169c37-cd26-4e66-8f96-d0a53a96d616","Type":"ContainerDied","Data":"ee223a4da91292046167a5001d561429aa9460697391d4a371893af1cc26b2aa"} Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.899878 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8tlzl" Feb 28 03:55:34 crc kubenswrapper[4624]: I0228 03:55:34.899901 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee223a4da91292046167a5001d561429aa9460697391d4a371893af1cc26b2aa" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.222481 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223429 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223455 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223473 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon-log" Feb 28 03:55:35 
crc kubenswrapper[4624]: I0228 03:55:35.223481 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223488 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d169c37-cd26-4e66-8f96-d0a53a96d616" containerName="cinder-db-sync" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223495 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d169c37-cd26-4e66-8f96-d0a53a96d616" containerName="cinder-db-sync" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223507 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223513 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223528 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223535 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223567 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-api" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223575 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-api" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223593 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerName="dnsmasq-dns" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223600 4624 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerName="dnsmasq-dns" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223610 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-httpd" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223617 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-httpd" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223626 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223632 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223641 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223647 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: E0228 03:55:35.223657 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerName="init" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.223664 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerName="init" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224502 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224544 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="69bc742a-a80d-43f4-90cc-993a14f7dbd5" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224557 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d169c37-cd26-4e66-8f96-d0a53a96d616" containerName="cinder-db-sync" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224569 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-httpd" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224581 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5862b3f-4b92-4bcc-8d77-4585e53475a8" containerName="neutron-api" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224593 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224603 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192312c-1396-4c6c-a687-b4ddfe356ded" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224612 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3924a0f6-ef65-46cb-a41c-8bcc27ad87a0" containerName="dnsmasq-dns" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224623 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon-log" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.224634 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4468dd90-bd07-4cdf-8fc8-de0dcfff1c4b" containerName="horizon" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.225888 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.231062 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fn79l" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.231133 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.233967 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.234236 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.260236 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.260342 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.260382 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.260417 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxw6c\" (UniqueName: \"kubernetes.io/projected/3388a3ea-259d-4648-a047-3f9c896f8264-kube-api-access-rxw6c\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.260472 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3388a3ea-259d-4648-a047-3f9c896f8264-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.260562 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-scripts\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.283266 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.322279 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-z26db"] Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.324370 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362446 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362519 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362565 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-config\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362589 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362618 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxw6c\" (UniqueName: \"kubernetes.io/projected/3388a3ea-259d-4648-a047-3f9c896f8264-kube-api-access-rxw6c\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 
03:55:35.362648 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362684 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3388a3ea-259d-4648-a047-3f9c896f8264-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362723 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7dn\" (UniqueName: \"kubernetes.io/projected/be61ab6a-7cb4-40a0-9658-0c58aaeba834-kube-api-access-hg7dn\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362755 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362773 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc 
kubenswrapper[4624]: I0228 03:55:35.362795 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-scripts\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.362821 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.366839 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3388a3ea-259d-4648-a047-3f9c896f8264-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.374836 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-scripts\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.380644 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.388062 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.394865 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.425347 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-z26db"] Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.467821 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7dn\" (UniqueName: \"kubernetes.io/projected/be61ab6a-7cb4-40a0-9658-0c58aaeba834-kube-api-access-hg7dn\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.467877 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.467901 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.467933 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.469365 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.469816 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.477635 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-config\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.477729 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.478528 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.479068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-config\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.480053 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.508241 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.508377 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.527484 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxw6c\" (UniqueName: \"kubernetes.io/projected/3388a3ea-259d-4648-a047-3f9c896f8264-kube-api-access-rxw6c\") pod \"cinder-scheduler-0\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.544271 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7dn\" (UniqueName: \"kubernetes.io/projected/be61ab6a-7cb4-40a0-9658-0c58aaeba834-kube-api-access-hg7dn\") pod \"dnsmasq-dns-6bb4fc677f-z26db\" (UID: 
\"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.573287 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.663808 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.809692 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.812051 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.827794 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.840307 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894376 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-scripts\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894451 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894479 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/63a8f10e-9109-4a57-b870-9f337557365d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894509 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894526 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfwzr\" (UniqueName: \"kubernetes.io/projected/63a8f10e-9109-4a57-b870-9f337557365d-kube-api-access-cfwzr\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data-custom\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.894649 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a8f10e-9109-4a57-b870-9f337557365d-logs\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.998792 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.999276 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a8f10e-9109-4a57-b870-9f337557365d-logs\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.999431 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-scripts\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.999674 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.999789 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63a8f10e-9109-4a57-b870-9f337557365d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.999878 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:35 crc kubenswrapper[4624]: I0228 03:55:35.999953 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfwzr\" (UniqueName: 
\"kubernetes.io/projected/63a8f10e-9109-4a57-b870-9f337557365d-kube-api-access-cfwzr\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.000836 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a8f10e-9109-4a57-b870-9f337557365d-logs\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.000918 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63a8f10e-9109-4a57-b870-9f337557365d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.022484 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.025725 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data-custom\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.027917 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.034221 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-scripts\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.063513 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfwzr\" (UniqueName: \"kubernetes.io/projected/63a8f10e-9109-4a57-b870-9f337557365d-kube-api-access-cfwzr\") pod \"cinder-api-0\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.148988 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.302889 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.303480 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.468639 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.469148 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.741929 4624 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:55:36 crc kubenswrapper[4624]: I0228 03:55:36.915929 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-z26db"] Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.081204 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.255055 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.311305 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.353308 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.462308 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.462865 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api" 
probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:37 crc kubenswrapper[4624]: I0228 03:55:37.893742 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 03:55:38 crc kubenswrapper[4624]: I0228 03:55:38.409416 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:38 crc kubenswrapper[4624]: I0228 03:55:38.409475 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:38 crc kubenswrapper[4624]: I0228 03:55:38.447200 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:38 crc kubenswrapper[4624]: I0228 03:55:38.463318 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:39 crc kubenswrapper[4624]: I0228 03:55:39.981723 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:55:41 crc kubenswrapper[4624]: I0228 03:55:41.478958 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:41 crc kubenswrapper[4624]: I0228 03:55:41.478885 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.451831 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.472407 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.472910 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-697469cdb8-v44r2" podUID="413c221c-acb0-4f2d-9621-b5bd0cdc14a5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.496659 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.508978 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-697469cdb8-v44r2" Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.739838 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b6cbb6d88-bsx2h"] Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 
03:55:42.742495 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" containerID="cri-o://221a2438ea7f8034dd34b857c660b9c4e5cd5735c4d43f7a3120fcba1166e62b" gracePeriod=30 Feb 28 03:55:42 crc kubenswrapper[4624]: I0228 03:55:42.743141 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" containerID="cri-o://2088cea6906d1eaac4a8ed04afc2296f10c98a6e248003c7a5cb2ef2ab1af75b" gracePeriod=30 Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.275445 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.557966 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-86b4894974-wxqfg"] Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.560253 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.620460 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-86b4894974-wxqfg"] Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.675773 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-combined-ca-bundle\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.675864 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0391f882-2f7a-47e9-b4f2-b640e146e079-logs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.675929 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-config-data\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.675960 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-internal-tls-certs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.676326 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-scripts\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.676348 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-public-tls-certs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.676546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdjcq\" (UniqueName: \"kubernetes.io/projected/0391f882-2f7a-47e9-b4f2-b640e146e079-kube-api-access-bdjcq\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-combined-ca-bundle\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779744 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0391f882-2f7a-47e9-b4f2-b640e146e079-logs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-config-data\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-internal-tls-certs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779872 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-scripts\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779893 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-public-tls-certs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.779932 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdjcq\" (UniqueName: \"kubernetes.io/projected/0391f882-2f7a-47e9-b4f2-b640e146e079-kube-api-access-bdjcq\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.788956 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0391f882-2f7a-47e9-b4f2-b640e146e079-logs\") pod 
\"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.795154 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-config-data\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.797571 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-public-tls-certs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.799444 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-scripts\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.800965 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-internal-tls-certs\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.811394 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391f882-2f7a-47e9-b4f2-b640e146e079-combined-ca-bundle\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 
28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.813448 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdjcq\" (UniqueName: \"kubernetes.io/projected/0391f882-2f7a-47e9-b4f2-b640e146e079-kube-api-access-bdjcq\") pod \"placement-86b4894974-wxqfg\" (UID: \"0391f882-2f7a-47e9-b4f2-b640e146e079\") " pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:43 crc kubenswrapper[4624]: I0228 03:55:43.896292 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:44 crc kubenswrapper[4624]: I0228 03:55:44.102941 4624 generic.go:334] "Generic (PLEG): container finished" podID="ae632f24-f74a-413a-9835-599c21020eb5" containerID="221a2438ea7f8034dd34b857c660b9c4e5cd5735c4d43f7a3120fcba1166e62b" exitCode=143 Feb 28 03:55:44 crc kubenswrapper[4624]: I0228 03:55:44.106076 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" event={"ID":"ae632f24-f74a-413a-9835-599c21020eb5","Type":"ContainerDied","Data":"221a2438ea7f8034dd34b857c660b9c4e5cd5735c4d43f7a3120fcba1166e62b"} Feb 28 03:55:48 crc kubenswrapper[4624]: I0228 03:55:48.178783 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63a8f10e-9109-4a57-b870-9f337557365d","Type":"ContainerStarted","Data":"09e4587a3582dbfe042c079ac49ca5b52c46ecc5fc6edf58567365b47bfb0d21"} Feb 28 03:55:48 crc kubenswrapper[4624]: I0228 03:55:48.307482 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:48 crc kubenswrapper[4624]: I0228 03:55:48.307539 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" 
podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:55:48 crc kubenswrapper[4624]: I0228 03:55:48.398617 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:37326->10.217.0.167:9311: read: connection reset by peer" Feb 28 03:55:48 crc kubenswrapper[4624]: I0228 03:55:48.399054 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:37328->10.217.0.167:9311: read: connection reset by peer" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.209496 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" event={"ID":"be61ab6a-7cb4-40a0-9658-0c58aaeba834","Type":"ContainerStarted","Data":"2cf47ca7acbfbf57e67efeb763f7d13af68e1efcb0c97546794ca4b90f4cafd7"} Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.238322 4624 generic.go:334] "Generic (PLEG): container finished" podID="ae632f24-f74a-413a-9835-599c21020eb5" containerID="2088cea6906d1eaac4a8ed04afc2296f10c98a6e248003c7a5cb2ef2ab1af75b" exitCode=0 Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.238470 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" event={"ID":"ae632f24-f74a-413a-9835-599c21020eb5","Type":"ContainerDied","Data":"2088cea6906d1eaac4a8ed04afc2296f10c98a6e248003c7a5cb2ef2ab1af75b"} Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.254240 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3388a3ea-259d-4648-a047-3f9c896f8264","Type":"ContainerStarted","Data":"b91f7c15122c4ee25b3d1b8cdeba20448e8ad2ab19595702f1c2179e3e249786"} Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.261039 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f48865754-rdngs" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.539939 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.540040 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.540178 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.540881 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.541280 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59b30a81dc689f74ba07a6866eb43af4d862d6a65c377ecf21944e761adfa908"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.541380 
4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://59b30a81dc689f74ba07a6866eb43af4d862d6a65c377ecf21944e761adfa908" gracePeriod=600 Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.542429 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.550275 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.550996 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-z8v89" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.551013 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.555072 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa3966ee-e42d-4dfe-a730-978481d7f497-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.555167 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa3966ee-e42d-4dfe-a730-978481d7f497-openstack-config\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.555415 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2nm\" 
(UniqueName: \"kubernetes.io/projected/fa3966ee-e42d-4dfe-a730-978481d7f497-kube-api-access-6x2nm\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.555497 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3966ee-e42d-4dfe-a730-978481d7f497-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.586996 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.659751 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x2nm\" (UniqueName: \"kubernetes.io/projected/fa3966ee-e42d-4dfe-a730-978481d7f497-kube-api-access-6x2nm\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.659886 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3966ee-e42d-4dfe-a730-978481d7f497-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.659967 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa3966ee-e42d-4dfe-a730-978481d7f497-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.660002 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa3966ee-e42d-4dfe-a730-978481d7f497-openstack-config\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.660865 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa3966ee-e42d-4dfe-a730-978481d7f497-openstack-config\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.667073 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3966ee-e42d-4dfe-a730-978481d7f497-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.679844 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa3966ee-e42d-4dfe-a730-978481d7f497-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.692680 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x2nm\" (UniqueName: \"kubernetes.io/projected/fa3966ee-e42d-4dfe-a730-978481d7f497-kube-api-access-6x2nm\") pod \"openstackclient\" (UID: \"fa3966ee-e42d-4dfe-a730-978481d7f497\") " pod="openstack/openstackclient" Feb 28 03:55:49 crc kubenswrapper[4624]: I0228 03:55:49.885880 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.277221 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="59b30a81dc689f74ba07a6866eb43af4d862d6a65c377ecf21944e761adfa908" exitCode=0 Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.277527 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"59b30a81dc689f74ba07a6866eb43af4d862d6a65c377ecf21944e761adfa908"} Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.277574 4624 scope.go:117] "RemoveContainer" containerID="5ae4a4e8e6c778ba7c9f4e2d8ca7006770f5fd2af20468097f12f94d4858478d" Feb 28 03:55:50 crc kubenswrapper[4624]: E0228 03:55:50.325902 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 28 03:55:50 crc kubenswrapper[4624]: E0228 03:55:50.326308 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fhqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2bf41691-ef23-4f33-83f0-ebd9c2ca1d87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 03:55:50 crc kubenswrapper[4624]: E0228 03:55:50.327816 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.641320 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.684486 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data\") pod \"ae632f24-f74a-413a-9835-599c21020eb5\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.684680 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-combined-ca-bundle\") pod \"ae632f24-f74a-413a-9835-599c21020eb5\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.684857 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data-custom\") pod \"ae632f24-f74a-413a-9835-599c21020eb5\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.684957 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwjz\" (UniqueName: \"kubernetes.io/projected/ae632f24-f74a-413a-9835-599c21020eb5-kube-api-access-nqwjz\") pod \"ae632f24-f74a-413a-9835-599c21020eb5\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.684994 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae632f24-f74a-413a-9835-599c21020eb5-logs\") pod \"ae632f24-f74a-413a-9835-599c21020eb5\" (UID: \"ae632f24-f74a-413a-9835-599c21020eb5\") " Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.686024 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ae632f24-f74a-413a-9835-599c21020eb5-logs" (OuterVolumeSpecName: "logs") pod "ae632f24-f74a-413a-9835-599c21020eb5" (UID: "ae632f24-f74a-413a-9835-599c21020eb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.702490 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae632f24-f74a-413a-9835-599c21020eb5" (UID: "ae632f24-f74a-413a-9835-599c21020eb5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.716182 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae632f24-f74a-413a-9835-599c21020eb5-kube-api-access-nqwjz" (OuterVolumeSpecName: "kube-api-access-nqwjz") pod "ae632f24-f74a-413a-9835-599c21020eb5" (UID: "ae632f24-f74a-413a-9835-599c21020eb5"). InnerVolumeSpecName "kube-api-access-nqwjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.792796 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.792831 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwjz\" (UniqueName: \"kubernetes.io/projected/ae632f24-f74a-413a-9835-599c21020eb5-kube-api-access-nqwjz\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.792847 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae632f24-f74a-413a-9835-599c21020eb5-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.831340 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae632f24-f74a-413a-9835-599c21020eb5" (UID: "ae632f24-f74a-413a-9835-599c21020eb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.891534 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data" (OuterVolumeSpecName: "config-data") pod "ae632f24-f74a-413a-9835-599c21020eb5" (UID: "ae632f24-f74a-413a-9835-599c21020eb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.895764 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:50 crc kubenswrapper[4624]: I0228 03:55:50.895805 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae632f24-f74a-413a-9835-599c21020eb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.053641 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.079042 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-86b4894974-wxqfg"] Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.305261 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" event={"ID":"ae632f24-f74a-413a-9835-599c21020eb5","Type":"ContainerDied","Data":"7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3"} Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.306052 4624 scope.go:117] "RemoveContainer" containerID="2088cea6906d1eaac4a8ed04afc2296f10c98a6e248003c7a5cb2ef2ab1af75b" Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.305529 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b6cbb6d88-bsx2h" Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.327433 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86b4894974-wxqfg" event={"ID":"0391f882-2f7a-47e9-b4f2-b640e146e079","Type":"ContainerStarted","Data":"e5ef7fd4bfdbd97dd290b7d8f4342cd57e5d4c5c999c44cebcc7311a01927537"} Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.395935 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa3966ee-e42d-4dfe-a730-978481d7f497","Type":"ContainerStarted","Data":"296b8db60ce95f2ebf4143e1cb91056ad04836f4268c17e724272a3aca68cbae"} Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.422403 4624 generic.go:334] "Generic (PLEG): container finished" podID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerID="ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9" exitCode=0 Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.422668 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="ceilometer-notification-agent" containerID="cri-o://42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff" gracePeriod=30 Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.423965 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" event={"ID":"be61ab6a-7cb4-40a0-9658-0c58aaeba834","Type":"ContainerDied","Data":"ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9"} Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.424564 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="sg-core" containerID="cri-o://c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a" gracePeriod=30 Feb 28 03:55:51 crc kubenswrapper[4624]: 
I0228 03:55:51.663114 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b6cbb6d88-bsx2h"] Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.681452 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b6cbb6d88-bsx2h"] Feb 28 03:55:51 crc kubenswrapper[4624]: I0228 03:55:51.748419 4624 scope.go:117] "RemoveContainer" containerID="221a2438ea7f8034dd34b857c660b9c4e5cd5735c4d43f7a3120fcba1166e62b" Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.118595 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae632f24-f74a-413a-9835-599c21020eb5" path="/var/lib/kubelet/pods/ae632f24-f74a-413a-9835-599c21020eb5/volumes" Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.444980 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63a8f10e-9109-4a57-b870-9f337557365d","Type":"ContainerStarted","Data":"382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd"} Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.453302 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" event={"ID":"be61ab6a-7cb4-40a0-9658-0c58aaeba834","Type":"ContainerStarted","Data":"b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21"} Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.456375 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.501703 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86b4894974-wxqfg" event={"ID":"0391f882-2f7a-47e9-b4f2-b640e146e079","Type":"ContainerStarted","Data":"f131bbab442e10bac758f38224557bcdb041df779562f44ca666018f61d10b21"} Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.510386 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-86b4894974-wxqfg" 
Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.510443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86b4894974-wxqfg" event={"ID":"0391f882-2f7a-47e9-b4f2-b640e146e079","Type":"ContainerStarted","Data":"3ddf543a7ab42ef02f05e3d1921b2285489496f0f66e371a9db04c6e3cef4e3d"} Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.510497 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.520776 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" podStartSLOduration=17.520747898 podStartE2EDuration="17.520747898s" podCreationTimestamp="2026-02-28 03:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:52.489303535 +0000 UTC m=+1207.153342844" watchObservedRunningTime="2026-02-28 03:55:52.520747898 +0000 UTC m=+1207.184787207" Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.535743 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133"} Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.563700 4624 generic.go:334] "Generic (PLEG): container finished" podID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerID="c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a" exitCode=2 Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 03:55:52.563757 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87","Type":"ContainerDied","Data":"c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a"} Feb 28 03:55:52 crc kubenswrapper[4624]: I0228 
03:55:52.592239 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-86b4894974-wxqfg" podStartSLOduration=9.592208139 podStartE2EDuration="9.592208139s" podCreationTimestamp="2026-02-28 03:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:52.576527093 +0000 UTC m=+1207.240566402" watchObservedRunningTime="2026-02-28 03:55:52.592208139 +0000 UTC m=+1207.256247448" Feb 28 03:55:53 crc kubenswrapper[4624]: I0228 03:55:53.578358 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63a8f10e-9109-4a57-b870-9f337557365d","Type":"ContainerStarted","Data":"e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985"} Feb 28 03:55:53 crc kubenswrapper[4624]: I0228 03:55:53.579192 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api-log" containerID="cri-o://382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd" gracePeriod=30 Feb 28 03:55:53 crc kubenswrapper[4624]: I0228 03:55:53.579523 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 03:55:53 crc kubenswrapper[4624]: I0228 03:55:53.579902 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api" containerID="cri-o://e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985" gracePeriod=30 Feb 28 03:55:53 crc kubenswrapper[4624]: I0228 03:55:53.585410 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3388a3ea-259d-4648-a047-3f9c896f8264","Type":"ContainerStarted","Data":"5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61"} Feb 28 03:55:53 crc 
kubenswrapper[4624]: I0228 03:55:53.614041 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=18.614020754 podStartE2EDuration="18.614020754s" podCreationTimestamp="2026-02-28 03:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:53.60795704 +0000 UTC m=+1208.271996349" watchObservedRunningTime="2026-02-28 03:55:53.614020754 +0000 UTC m=+1208.278060063" Feb 28 03:55:54 crc kubenswrapper[4624]: I0228 03:55:54.599467 4624 generic.go:334] "Generic (PLEG): container finished" podID="63a8f10e-9109-4a57-b870-9f337557365d" containerID="382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd" exitCode=143 Feb 28 03:55:54 crc kubenswrapper[4624]: I0228 03:55:54.599850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63a8f10e-9109-4a57-b870-9f337557365d","Type":"ContainerDied","Data":"382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd"} Feb 28 03:55:54 crc kubenswrapper[4624]: I0228 03:55:54.605609 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3388a3ea-259d-4648-a047-3f9c896f8264","Type":"ContainerStarted","Data":"fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a"} Feb 28 03:55:54 crc kubenswrapper[4624]: I0228 03:55:54.628029 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=17.114432996 podStartE2EDuration="19.628000176s" podCreationTimestamp="2026-02-28 03:55:35 +0000 UTC" firstStartedPulling="2026-02-28 03:55:48.815554913 +0000 UTC m=+1203.479594222" lastFinishedPulling="2026-02-28 03:55:51.329122093 +0000 UTC m=+1205.993161402" observedRunningTime="2026-02-28 03:55:54.626561707 +0000 UTC m=+1209.290601026" watchObservedRunningTime="2026-02-28 03:55:54.628000176 +0000 UTC 
m=+1209.292039495" Feb 28 03:55:55 crc kubenswrapper[4624]: I0228 03:55:55.574286 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 03:55:55 crc kubenswrapper[4624]: I0228 03:55:55.617769 4624 generic.go:334] "Generic (PLEG): container finished" podID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerID="42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff" exitCode=0 Feb 28 03:55:55 crc kubenswrapper[4624]: I0228 03:55:55.619486 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87","Type":"ContainerDied","Data":"42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff"} Feb 28 03:55:55 crc kubenswrapper[4624]: I0228 03:55:55.939455 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005296 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fhqm\" (UniqueName: \"kubernetes.io/projected/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-kube-api-access-2fhqm\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005377 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-combined-ca-bundle\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005462 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-run-httpd\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 
03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005495 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-scripts\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-sg-core-conf-yaml\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005627 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-config-data\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.005663 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-log-httpd\") pod \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\" (UID: \"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87\") " Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.006225 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.006624 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.007652 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.007683 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.017070 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-kube-api-access-2fhqm" (OuterVolumeSpecName: "kube-api-access-2fhqm") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "kube-api-access-2fhqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.036286 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-scripts" (OuterVolumeSpecName: "scripts") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.048552 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.069665 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.080345 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-config-data" (OuterVolumeSpecName: "config-data") pod "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" (UID: "2bf41691-ef23-4f33-83f0-ebd9c2ca1d87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.109415 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fhqm\" (UniqueName: \"kubernetes.io/projected/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-kube-api-access-2fhqm\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.109843 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.109944 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.110006 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.113056 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.480504 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b4965c79c-gh5mv" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.587421 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b75fb948d-dzc9p"] Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.589701 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b75fb948d-dzc9p" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-api" 
containerID="cri-o://f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072" gracePeriod=30 Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.590010 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b75fb948d-dzc9p" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-httpd" containerID="cri-o://0299974817a49a23813b6a4e0541439b33a4eb4cdaf556cd7a52e1dc71f6dd3e" gracePeriod=30 Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.645224 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bf41691-ef23-4f33-83f0-ebd9c2ca1d87","Type":"ContainerDied","Data":"95d452a624edafd2bd122f41fd5aa727cd6e26c18fe907864c317e8c55a96864"} Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.645304 4624 scope.go:117] "RemoveContainer" containerID="c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.646023 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.726660 4624 scope.go:117] "RemoveContainer" containerID="42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.758176 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.788021 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817038 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:55:56 crc kubenswrapper[4624]: E0228 03:55:56.817567 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817594 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" Feb 28 03:55:56 crc kubenswrapper[4624]: E0228 03:55:56.817617 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="sg-core" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817625 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="sg-core" Feb 28 03:55:56 crc kubenswrapper[4624]: E0228 03:55:56.817660 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817669 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" Feb 28 03:55:56 crc kubenswrapper[4624]: E0228 03:55:56.817683 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="ceilometer-notification-agent" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817690 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="ceilometer-notification-agent" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817918 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="sg-core" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817933 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" containerName="ceilometer-notification-agent" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817954 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.817964 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae632f24-f74a-413a-9835-599c21020eb5" containerName="barbican-api-log" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.821948 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.837547 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.841791 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.842492 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945554 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-run-httpd\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945602 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945629 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szkb\" (UniqueName: \"kubernetes.io/projected/0825cb79-326c-4d00-84e7-f593eabeb7d7-kube-api-access-4szkb\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945669 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945711 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-scripts\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945755 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-config-data\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:56 crc kubenswrapper[4624]: I0228 03:55:56.945797 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-log-httpd\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.048277 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-run-httpd\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.048923 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.048976 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4szkb\" (UniqueName: \"kubernetes.io/projected/0825cb79-326c-4d00-84e7-f593eabeb7d7-kube-api-access-4szkb\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.049075 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.049168 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-scripts\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.049300 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-config-data\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.049416 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-log-httpd\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.048859 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-run-httpd\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc 
kubenswrapper[4624]: I0228 03:55:57.050870 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-log-httpd\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.058576 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.060720 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.060957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-config-data\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.066752 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-scripts\") pod \"ceilometer-0\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.089669 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szkb\" (UniqueName: \"kubernetes.io/projected/0825cb79-326c-4d00-84e7-f593eabeb7d7-kube-api-access-4szkb\") pod \"ceilometer-0\" (UID: 
\"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.205580 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.319885 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-775c6bbdc-lvbk6"] Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.322187 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.327547 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.327850 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.330616 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.463652 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-config-data\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.476998 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-etc-swift\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.477241 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-log-httpd\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.477319 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-run-httpd\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.477398 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-internal-tls-certs\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.477435 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ffr\" (UniqueName: \"kubernetes.io/projected/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-kube-api-access-46ffr\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.477562 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-combined-ca-bundle\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 
03:55:57.477621 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-public-tls-certs\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.464300 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-775c6bbdc-lvbk6"] Feb 28 03:55:57 crc kubenswrapper[4624]: E0228 03:55:57.525668 4624 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d93d8632f7c166993b99c022bbe04c02269a790c30060bcddeee8f9a90d9063d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d93d8632f7c166993b99c022bbe04c02269a790c30060bcddeee8f9a90d9063d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_2bf41691-ef23-4f33-83f0-ebd9c2ca1d87/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_2bf41691-ef23-4f33-83f0-ebd9c2ca1d87/ceilometer-notification-agent/0.log: no such file or directory Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579522 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-combined-ca-bundle\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579592 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-public-tls-certs\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " 
pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579653 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-config-data\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579678 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-etc-swift\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579736 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-log-httpd\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579765 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-run-httpd\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.579801 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-internal-tls-certs\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 
03:55:57.579828 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ffr\" (UniqueName: \"kubernetes.io/projected/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-kube-api-access-46ffr\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.581423 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-log-httpd\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.581676 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-run-httpd\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.591075 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-config-data\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.593664 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-public-tls-certs\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.594935 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-combined-ca-bundle\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.596305 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-etc-swift\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.604814 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-internal-tls-certs\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.606900 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ffr\" (UniqueName: \"kubernetes.io/projected/7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41-kube-api-access-46ffr\") pod \"swift-proxy-775c6bbdc-lvbk6\" (UID: \"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41\") " pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.694503 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerID="0299974817a49a23813b6a4e0541439b33a4eb4cdaf556cd7a52e1dc71f6dd3e" exitCode=0 Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.694938 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b75fb948d-dzc9p" event={"ID":"e5f3e502-df83-4a0e-8240-53d8d6d78a80","Type":"ContainerDied","Data":"0299974817a49a23813b6a4e0541439b33a4eb4cdaf556cd7a52e1dc71f6dd3e"} Feb 28 03:55:57 crc kubenswrapper[4624]: 
I0228 03:55:57.746208 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:57 crc kubenswrapper[4624]: I0228 03:55:57.973637 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.110730 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bf41691-ef23-4f33-83f0-ebd9c2ca1d87" path="/var/lib/kubelet/pods/2bf41691-ef23-4f33-83f0-ebd9c2ca1d87/volumes" Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.609105 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-775c6bbdc-lvbk6"] Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.792921 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerStarted","Data":"c19921df6df8b0709e2adb556845dab009404866935c004bb04b222e05045769"} Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.800910 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-775c6bbdc-lvbk6" event={"ID":"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41","Type":"ContainerStarted","Data":"33bdd217989eb6ce571ee6b491c27fedc933d92bd5a7ff4dda423fad6f06d9bc"} Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.829541 4624 generic.go:334] "Generic (PLEG): container finished" podID="c42c908a-802d-416b-a7de-066df6e008bd" containerID="8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808" exitCode=137 Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.829873 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75c696b849-456ll" event={"ID":"c42c908a-802d-416b-a7de-066df6e008bd","Type":"ContainerDied","Data":"8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808"} Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.833394 4624 generic.go:334] "Generic (PLEG): 
container finished" podID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerID="67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204" exitCode=137 Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.833486 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" event={"ID":"32836bdc-f650-4a3a-b1f9-21de1a2992e3","Type":"ContainerDied","Data":"67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204"} Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.870166 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.929665 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data\") pod \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.929982 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32836bdc-f650-4a3a-b1f9-21de1a2992e3-logs\") pod \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.930024 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxsrh\" (UniqueName: \"kubernetes.io/projected/32836bdc-f650-4a3a-b1f9-21de1a2992e3-kube-api-access-lxsrh\") pod \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.930101 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-combined-ca-bundle\") 
pod \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.930159 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data-custom\") pod \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\" (UID: \"32836bdc-f650-4a3a-b1f9-21de1a2992e3\") " Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.942023 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32836bdc-f650-4a3a-b1f9-21de1a2992e3-logs" (OuterVolumeSpecName: "logs") pod "32836bdc-f650-4a3a-b1f9-21de1a2992e3" (UID: "32836bdc-f650-4a3a-b1f9-21de1a2992e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.943388 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "32836bdc-f650-4a3a-b1f9-21de1a2992e3" (UID: "32836bdc-f650-4a3a-b1f9-21de1a2992e3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:58 crc kubenswrapper[4624]: I0228 03:55:58.981988 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32836bdc-f650-4a3a-b1f9-21de1a2992e3-kube-api-access-lxsrh" (OuterVolumeSpecName: "kube-api-access-lxsrh") pod "32836bdc-f650-4a3a-b1f9-21de1a2992e3" (UID: "32836bdc-f650-4a3a-b1f9-21de1a2992e3"). InnerVolumeSpecName "kube-api-access-lxsrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.021900 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.033991 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32836bdc-f650-4a3a-b1f9-21de1a2992e3-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.036871 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxsrh\" (UniqueName: \"kubernetes.io/projected/32836bdc-f650-4a3a-b1f9-21de1a2992e3-kube-api-access-lxsrh\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.037012 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.105862 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32836bdc-f650-4a3a-b1f9-21de1a2992e3" (UID: "32836bdc-f650-4a3a-b1f9-21de1a2992e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.121824 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data" (OuterVolumeSpecName: "config-data") pod "32836bdc-f650-4a3a-b1f9-21de1a2992e3" (UID: "32836bdc-f650-4a3a-b1f9-21de1a2992e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.139682 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data-custom\") pod \"c42c908a-802d-416b-a7de-066df6e008bd\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.139889 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data\") pod \"c42c908a-802d-416b-a7de-066df6e008bd\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.139956 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fzm\" (UniqueName: \"kubernetes.io/projected/c42c908a-802d-416b-a7de-066df6e008bd-kube-api-access-q4fzm\") pod \"c42c908a-802d-416b-a7de-066df6e008bd\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.139991 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-combined-ca-bundle\") pod \"c42c908a-802d-416b-a7de-066df6e008bd\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.140038 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42c908a-802d-416b-a7de-066df6e008bd-logs\") pod \"c42c908a-802d-416b-a7de-066df6e008bd\" (UID: \"c42c908a-802d-416b-a7de-066df6e008bd\") " Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.140935 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.140955 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32836bdc-f650-4a3a-b1f9-21de1a2992e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.143944 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42c908a-802d-416b-a7de-066df6e008bd-logs" (OuterVolumeSpecName: "logs") pod "c42c908a-802d-416b-a7de-066df6e008bd" (UID: "c42c908a-802d-416b-a7de-066df6e008bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.155965 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c42c908a-802d-416b-a7de-066df6e008bd" (UID: "c42c908a-802d-416b-a7de-066df6e008bd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.178500 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42c908a-802d-416b-a7de-066df6e008bd-kube-api-access-q4fzm" (OuterVolumeSpecName: "kube-api-access-q4fzm") pod "c42c908a-802d-416b-a7de-066df6e008bd" (UID: "c42c908a-802d-416b-a7de-066df6e008bd"). InnerVolumeSpecName "kube-api-access-q4fzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.224581 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c42c908a-802d-416b-a7de-066df6e008bd" (UID: "c42c908a-802d-416b-a7de-066df6e008bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.245065 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.245120 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4fzm\" (UniqueName: \"kubernetes.io/projected/c42c908a-802d-416b-a7de-066df6e008bd-kube-api-access-q4fzm\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.245134 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.245142 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c42c908a-802d-416b-a7de-066df6e008bd-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.273939 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data" (OuterVolumeSpecName: "config-data") pod "c42c908a-802d-416b-a7de-066df6e008bd" (UID: "c42c908a-802d-416b-a7de-066df6e008bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.347006 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c908a-802d-416b-a7de-066df6e008bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.854581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" event={"ID":"32836bdc-f650-4a3a-b1f9-21de1a2992e3","Type":"ContainerDied","Data":"b7ba5564aa1c99b564d1863d34fdec3dc70b4d1b2faad3c8cd4c226050a2a62f"} Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.855094 4624 scope.go:117] "RemoveContainer" containerID="67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.854855 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7cb456bd-7zslx" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.875470 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerStarted","Data":"4b3220fdbdc85c417d12b1c844f17869c42ffa3848bff9294a1c23da780e0ce6"} Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.875546 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerStarted","Data":"9c14ff5bcc5023ffe160009d9bb9ef10feb87d90a735836b900371e53b386a1c"} Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.882139 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-775c6bbdc-lvbk6" event={"ID":"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41","Type":"ContainerStarted","Data":"1c1c0b06ecd68fe0fe0fa12dca1d0c8a6816beb9e17352c9305ed397b2683ebd"} Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.882170 
4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-775c6bbdc-lvbk6" event={"ID":"7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41","Type":"ContainerStarted","Data":"9446b04ccd118b40e0b9c08779e93cbcb94df319cffe0a067dddc75e7bf4232b"} Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.882395 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.882446 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.889425 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75c696b849-456ll" event={"ID":"c42c908a-802d-416b-a7de-066df6e008bd","Type":"ContainerDied","Data":"9c93ce3ca8046cf05b88d534c322db9695835608ff666e0e07af9539a8919bc9"} Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.889524 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-75c696b849-456ll" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.893656 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-f7cb456bd-7zslx"] Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.910746 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-f7cb456bd-7zslx"] Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.915688 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-775c6bbdc-lvbk6" podStartSLOduration=2.91564259 podStartE2EDuration="2.91564259s" podCreationTimestamp="2026-02-28 03:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:55:59.903768627 +0000 UTC m=+1214.567807936" watchObservedRunningTime="2026-02-28 03:55:59.91564259 +0000 UTC m=+1214.579681889" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.924262 4624 scope.go:117] "RemoveContainer" containerID="8c63bf156015f7c145e8e225f906f2ed92cbe5d65c471bb91f916daccf77b6eb" Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.942155 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-75c696b849-456ll"] Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.963802 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-75c696b849-456ll"] Feb 28 03:55:59 crc kubenswrapper[4624]: I0228 03:55:59.997362 4624 scope.go:117] "RemoveContainer" containerID="8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.023458 4624 scope.go:117] "RemoveContainer" containerID="d2ca66697571e5f8206e57c11167ef607f623a8688e20234c1cbcba16ceb5a32" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.102643 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" path="/var/lib/kubelet/pods/32836bdc-f650-4a3a-b1f9-21de1a2992e3/volumes" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.103358 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42c908a-802d-416b-a7de-066df6e008bd" path="/var/lib/kubelet/pods/c42c908a-802d-416b-a7de-066df6e008bd/volumes" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.160548 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537516-ggl8l"] Feb 28 03:56:00 crc kubenswrapper[4624]: E0228 03:56:00.161225 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161254 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener" Feb 28 03:56:00 crc kubenswrapper[4624]: E0228 03:56:00.161275 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker-log" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161285 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker-log" Feb 28 03:56:00 crc kubenswrapper[4624]: E0228 03:56:00.161318 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener-log" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161329 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener-log" Feb 28 03:56:00 crc kubenswrapper[4624]: E0228 03:56:00.161354 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker" 
Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161364 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161643 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161684 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="32836bdc-f650-4a3a-b1f9-21de1a2992e3" containerName="barbican-keystone-listener-log" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161702 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker-log" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.161715 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c908a-802d-416b-a7de-066df6e008bd" containerName="barbican-worker" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.162778 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.167182 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.167377 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.167437 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.168367 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537516-ggl8l"] Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.271849 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rpn2\" (UniqueName: \"kubernetes.io/projected/c57d1ad8-41f4-45c4-8823-9b854dcf073e-kube-api-access-6rpn2\") pod \"auto-csr-approver-29537516-ggl8l\" (UID: \"c57d1ad8-41f4-45c4-8823-9b854dcf073e\") " pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.377388 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rpn2\" (UniqueName: \"kubernetes.io/projected/c57d1ad8-41f4-45c4-8823-9b854dcf073e-kube-api-access-6rpn2\") pod \"auto-csr-approver-29537516-ggl8l\" (UID: \"c57d1ad8-41f4-45c4-8823-9b854dcf073e\") " pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.396894 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rpn2\" (UniqueName: \"kubernetes.io/projected/c57d1ad8-41f4-45c4-8823-9b854dcf073e-kube-api-access-6rpn2\") pod \"auto-csr-approver-29537516-ggl8l\" (UID: \"c57d1ad8-41f4-45c4-8823-9b854dcf073e\") " 
pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.489923 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.670285 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.765319 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-t94lj"] Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.766233 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" podUID="030a5f79-331e-4d94-98e9-67ebca169648" containerName="dnsmasq-dns" containerID="cri-o://af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75" gracePeriod=10 Feb 28 03:56:00 crc kubenswrapper[4624]: I0228 03:56:00.952333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerStarted","Data":"8389c7bb6043647b23edadeecdf12f91902fa803ffdf95250f30bfcc0b1e103b"} Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.256128 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.358158 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.427784 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537516-ggl8l"] Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.652815 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.773110 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-sb\") pod \"030a5f79-331e-4d94-98e9-67ebca169648\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.773213 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-svc\") pod \"030a5f79-331e-4d94-98e9-67ebca169648\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.773284 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-swift-storage-0\") pod \"030a5f79-331e-4d94-98e9-67ebca169648\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.773379 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-config\") pod \"030a5f79-331e-4d94-98e9-67ebca169648\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.773409 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wckgm\" (UniqueName: \"kubernetes.io/projected/030a5f79-331e-4d94-98e9-67ebca169648-kube-api-access-wckgm\") pod \"030a5f79-331e-4d94-98e9-67ebca169648\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.773624 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-nb\") pod \"030a5f79-331e-4d94-98e9-67ebca169648\" (UID: \"030a5f79-331e-4d94-98e9-67ebca169648\") " Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.836502 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030a5f79-331e-4d94-98e9-67ebca169648-kube-api-access-wckgm" (OuterVolumeSpecName: "kube-api-access-wckgm") pod "030a5f79-331e-4d94-98e9-67ebca169648" (UID: "030a5f79-331e-4d94-98e9-67ebca169648"). InnerVolumeSpecName "kube-api-access-wckgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.876211 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wckgm\" (UniqueName: \"kubernetes.io/projected/030a5f79-331e-4d94-98e9-67ebca169648-kube-api-access-wckgm\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.882760 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-config" (OuterVolumeSpecName: "config") pod "030a5f79-331e-4d94-98e9-67ebca169648" (UID: "030a5f79-331e-4d94-98e9-67ebca169648"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.887185 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "030a5f79-331e-4d94-98e9-67ebca169648" (UID: "030a5f79-331e-4d94-98e9-67ebca169648"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.920015 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "030a5f79-331e-4d94-98e9-67ebca169648" (UID: "030a5f79-331e-4d94-98e9-67ebca169648"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.973852 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" event={"ID":"c57d1ad8-41f4-45c4-8823-9b854dcf073e","Type":"ContainerStarted","Data":"cfe1eb2c5c8d948156d515d3e8ce2accd70123a3957c06bd50da76e41da3b22a"} Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.978508 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.978543 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.978552 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.982785 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "030a5f79-331e-4d94-98e9-67ebca169648" (UID: "030a5f79-331e-4d94-98e9-67ebca169648"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.982970 4624 generic.go:334] "Generic (PLEG): container finished" podID="030a5f79-331e-4d94-98e9-67ebca169648" containerID="af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75" exitCode=0 Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.983221 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="cinder-scheduler" containerID="cri-o://5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61" gracePeriod=30 Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.983329 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.983836 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" event={"ID":"030a5f79-331e-4d94-98e9-67ebca169648","Type":"ContainerDied","Data":"af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75"} Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.983863 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-t94lj" event={"ID":"030a5f79-331e-4d94-98e9-67ebca169648","Type":"ContainerDied","Data":"46d628c29cf79d07f54bf2a8ddf75b40dc6fda0600f62934492f458ccc08a464"} Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.983881 4624 scope.go:117] "RemoveContainer" containerID="af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75" Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.984358 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="probe" 
containerID="cri-o://fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a" gracePeriod=30 Feb 28 03:56:01 crc kubenswrapper[4624]: I0228 03:56:01.997918 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "030a5f79-331e-4d94-98e9-67ebca169648" (UID: "030a5f79-331e-4d94-98e9-67ebca169648"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.064452 4624 scope.go:117] "RemoveContainer" containerID="5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.080296 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.080328 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030a5f79-331e-4d94-98e9-67ebca169648-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.155400 4624 scope.go:117] "RemoveContainer" containerID="af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75" Feb 28 03:56:02 crc kubenswrapper[4624]: E0228 03:56:02.159202 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75\": container with ID starting with af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75 not found: ID does not exist" containerID="af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.159249 4624 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75"} err="failed to get container status \"af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75\": rpc error: code = NotFound desc = could not find container \"af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75\": container with ID starting with af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75 not found: ID does not exist" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.159274 4624 scope.go:117] "RemoveContainer" containerID="5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f" Feb 28 03:56:02 crc kubenswrapper[4624]: E0228 03:56:02.159578 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f\": container with ID starting with 5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f not found: ID does not exist" containerID="5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.159621 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f"} err="failed to get container status \"5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f\": rpc error: code = NotFound desc = could not find container \"5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f\": container with ID starting with 5ca253a22f0199cb435c6f2aba59edde0a7927e5306290816934628ccdfdac0f not found: ID does not exist" Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.324217 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-t94lj"] Feb 28 03:56:02 crc kubenswrapper[4624]: I0228 03:56:02.341544 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-688c87cc99-t94lj"] Feb 28 03:56:03 crc kubenswrapper[4624]: I0228 03:56:03.011298 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerID="f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072" exitCode=0 Feb 28 03:56:03 crc kubenswrapper[4624]: I0228 03:56:03.011518 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b75fb948d-dzc9p" event={"ID":"e5f3e502-df83-4a0e-8240-53d8d6d78a80","Type":"ContainerDied","Data":"f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072"} Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.034034 4624 generic.go:334] "Generic (PLEG): container finished" podID="3388a3ea-259d-4648-a047-3f9c896f8264" containerID="fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a" exitCode=0 Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.034698 4624 generic.go:334] "Generic (PLEG): container finished" podID="3388a3ea-259d-4648-a047-3f9c896f8264" containerID="5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61" exitCode=0 Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.034748 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3388a3ea-259d-4648-a047-3f9c896f8264","Type":"ContainerDied","Data":"fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a"} Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.034783 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3388a3ea-259d-4648-a047-3f9c896f8264","Type":"ContainerDied","Data":"5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61"} Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.038680 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" 
event={"ID":"c57d1ad8-41f4-45c4-8823-9b854dcf073e","Type":"ContainerStarted","Data":"8ed4b96d3f0d604124aec8a4cd287291c1c0d013b02a3f4b74063ff10da4024c"} Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.050689 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerStarted","Data":"21657615dd78cbbb232cdb6fe89f351ebdfe6bf9751dc4146822e188c39c42cf"} Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.051961 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.072732 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" podStartSLOduration=2.9973102259999997 podStartE2EDuration="4.072708425s" podCreationTimestamp="2026-02-28 03:56:00 +0000 UTC" firstStartedPulling="2026-02-28 03:56:01.499950528 +0000 UTC m=+1216.163989837" lastFinishedPulling="2026-02-28 03:56:02.575348717 +0000 UTC m=+1217.239388036" observedRunningTime="2026-02-28 03:56:04.052803344 +0000 UTC m=+1218.716842743" watchObservedRunningTime="2026-02-28 03:56:04.072708425 +0000 UTC m=+1218.736747734" Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.079480 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.711490056 podStartE2EDuration="8.079462958s" podCreationTimestamp="2026-02-28 03:55:56 +0000 UTC" firstStartedPulling="2026-02-28 03:55:57.968468129 +0000 UTC m=+1212.632507438" lastFinishedPulling="2026-02-28 03:56:02.336441031 +0000 UTC m=+1217.000480340" observedRunningTime="2026-02-28 03:56:04.07546537 +0000 UTC m=+1218.739504679" watchObservedRunningTime="2026-02-28 03:56:04.079462958 +0000 UTC m=+1218.743502267" Feb 28 03:56:04 crc kubenswrapper[4624]: I0228 03:56:04.106190 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="030a5f79-331e-4d94-98e9-67ebca169648" path="/var/lib/kubelet/pods/030a5f79-331e-4d94-98e9-67ebca169648/volumes" Feb 28 03:56:05 crc kubenswrapper[4624]: I0228 03:56:05.104815 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerID="0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163" exitCode=137 Feb 28 03:56:05 crc kubenswrapper[4624]: I0228 03:56:05.105756 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerDied","Data":"0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163"} Feb 28 03:56:05 crc kubenswrapper[4624]: I0228 03:56:05.115760 4624 generic.go:334] "Generic (PLEG): container finished" podID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerID="5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43" exitCode=137 Feb 28 03:56:05 crc kubenswrapper[4624]: I0228 03:56:05.115888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerDied","Data":"5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43"} Feb 28 03:56:05 crc kubenswrapper[4624]: I0228 03:56:05.657455 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 28 03:56:06 crc kubenswrapper[4624]: I0228 03:56:06.133836 4624 generic.go:334] "Generic (PLEG): container finished" podID="c57d1ad8-41f4-45c4-8823-9b854dcf073e" containerID="8ed4b96d3f0d604124aec8a4cd287291c1c0d013b02a3f4b74063ff10da4024c" exitCode=0 Feb 28 03:56:06 crc kubenswrapper[4624]: I0228 03:56:06.135193 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" event={"ID":"c57d1ad8-41f4-45c4-8823-9b854dcf073e","Type":"ContainerDied","Data":"8ed4b96d3f0d604124aec8a4cd287291c1c0d013b02a3f4b74063ff10da4024c"} Feb 
28 03:56:06 crc kubenswrapper[4624]: I0228 03:56:06.553766 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:07 crc kubenswrapper[4624]: I0228 03:56:07.151142 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-central-agent" containerID="cri-o://9c14ff5bcc5023ffe160009d9bb9ef10feb87d90a735836b900371e53b386a1c" gracePeriod=30 Feb 28 03:56:07 crc kubenswrapper[4624]: I0228 03:56:07.151246 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="proxy-httpd" containerID="cri-o://21657615dd78cbbb232cdb6fe89f351ebdfe6bf9751dc4146822e188c39c42cf" gracePeriod=30 Feb 28 03:56:07 crc kubenswrapper[4624]: I0228 03:56:07.151282 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="sg-core" containerID="cri-o://8389c7bb6043647b23edadeecdf12f91902fa803ffdf95250f30bfcc0b1e103b" gracePeriod=30 Feb 28 03:56:07 crc kubenswrapper[4624]: I0228 03:56:07.151318 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-notification-agent" containerID="cri-o://4b3220fdbdc85c417d12b1c844f17869c42ffa3848bff9294a1c23da780e0ce6" gracePeriod=30 Feb 28 03:56:07 crc kubenswrapper[4624]: I0228 03:56:07.755617 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:56:07 crc kubenswrapper[4624]: I0228 03:56:07.756465 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-775c6bbdc-lvbk6" Feb 28 03:56:08 crc kubenswrapper[4624]: I0228 03:56:08.204604 4624 generic.go:334] "Generic 
(PLEG): container finished" podID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerID="21657615dd78cbbb232cdb6fe89f351ebdfe6bf9751dc4146822e188c39c42cf" exitCode=0 Feb 28 03:56:08 crc kubenswrapper[4624]: I0228 03:56:08.204646 4624 generic.go:334] "Generic (PLEG): container finished" podID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerID="8389c7bb6043647b23edadeecdf12f91902fa803ffdf95250f30bfcc0b1e103b" exitCode=2 Feb 28 03:56:08 crc kubenswrapper[4624]: I0228 03:56:08.204655 4624 generic.go:334] "Generic (PLEG): container finished" podID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerID="4b3220fdbdc85c417d12b1c844f17869c42ffa3848bff9294a1c23da780e0ce6" exitCode=0 Feb 28 03:56:08 crc kubenswrapper[4624]: I0228 03:56:08.204718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerDied","Data":"21657615dd78cbbb232cdb6fe89f351ebdfe6bf9751dc4146822e188c39c42cf"} Feb 28 03:56:08 crc kubenswrapper[4624]: I0228 03:56:08.204786 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerDied","Data":"8389c7bb6043647b23edadeecdf12f91902fa803ffdf95250f30bfcc0b1e103b"} Feb 28 03:56:08 crc kubenswrapper[4624]: I0228 03:56:08.204838 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerDied","Data":"4b3220fdbdc85c417d12b1c844f17869c42ffa3848bff9294a1c23da780e0ce6"} Feb 28 03:56:09 crc kubenswrapper[4624]: I0228 03:56:09.221443 4624 generic.go:334] "Generic (PLEG): container finished" podID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerID="9c14ff5bcc5023ffe160009d9bb9ef10feb87d90a735836b900371e53b386a1c" exitCode=0 Feb 28 03:56:09 crc kubenswrapper[4624]: I0228 03:56:09.221499 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerDied","Data":"9c14ff5bcc5023ffe160009d9bb9ef10feb87d90a735836b900371e53b386a1c"} Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.078872 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-t8gp6"] Feb 28 03:56:11 crc kubenswrapper[4624]: E0228 03:56:11.079815 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030a5f79-331e-4d94-98e9-67ebca169648" containerName="init" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.079836 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="030a5f79-331e-4d94-98e9-67ebca169648" containerName="init" Feb 28 03:56:11 crc kubenswrapper[4624]: E0228 03:56:11.079861 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030a5f79-331e-4d94-98e9-67ebca169648" containerName="dnsmasq-dns" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.079872 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="030a5f79-331e-4d94-98e9-67ebca169648" containerName="dnsmasq-dns" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.080160 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="030a5f79-331e-4d94-98e9-67ebca169648" containerName="dnsmasq-dns" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.081128 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.100117 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-t8gp6"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.146105 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jz2\" (UniqueName: \"kubernetes.io/projected/59f46a0e-807d-4856-bb08-878dc3d19728-kube-api-access-79jz2\") pod \"nova-api-db-create-t8gp6\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.146288 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46a0e-807d-4856-bb08-878dc3d19728-operator-scripts\") pod \"nova-api-db-create-t8gp6\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.247948 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46a0e-807d-4856-bb08-878dc3d19728-operator-scripts\") pod \"nova-api-db-create-t8gp6\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.248024 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79jz2\" (UniqueName: \"kubernetes.io/projected/59f46a0e-807d-4856-bb08-878dc3d19728-kube-api-access-79jz2\") pod \"nova-api-db-create-t8gp6\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.249562 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/59f46a0e-807d-4856-bb08-878dc3d19728-operator-scripts\") pod \"nova-api-db-create-t8gp6\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.277898 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79jz2\" (UniqueName: \"kubernetes.io/projected/59f46a0e-807d-4856-bb08-878dc3d19728-kube-api-access-79jz2\") pod \"nova-api-db-create-t8gp6\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.308467 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qkq46"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.313546 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.335891 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-17d0-account-create-update-gv99f"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.337629 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.348677 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.391393 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qkq46"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.404625 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.451133 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sn7\" (UniqueName: \"kubernetes.io/projected/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-kube-api-access-b9sn7\") pod \"nova-api-17d0-account-create-update-gv99f\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.451237 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-operator-scripts\") pod \"nova-api-17d0-account-create-update-gv99f\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.451265 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjx88\" (UniqueName: \"kubernetes.io/projected/5506c57e-14c0-4fca-88ba-09db2ac80047-kube-api-access-gjx88\") pod \"nova-cell0-db-create-qkq46\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.451357 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5506c57e-14c0-4fca-88ba-09db2ac80047-operator-scripts\") pod \"nova-cell0-db-create-qkq46\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.452710 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-17d0-account-create-update-gv99f"] Feb 28 03:56:11 crc kubenswrapper[4624]: 
I0228 03:56:11.480162 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mj64h"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.481927 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.486112 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mj64h"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.540167 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9cfa-account-create-update-w9qh7"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.541757 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.550279 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.553015 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5506c57e-14c0-4fca-88ba-09db2ac80047-operator-scripts\") pod \"nova-cell0-db-create-qkq46\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.553101 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9sn7\" (UniqueName: \"kubernetes.io/projected/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-kube-api-access-b9sn7\") pod \"nova-api-17d0-account-create-update-gv99f\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.553135 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-operator-scripts\") pod \"nova-cell1-db-create-mj64h\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.553189 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-operator-scripts\") pod \"nova-api-17d0-account-create-update-gv99f\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.553213 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjx88\" (UniqueName: \"kubernetes.io/projected/5506c57e-14c0-4fca-88ba-09db2ac80047-kube-api-access-gjx88\") pod \"nova-cell0-db-create-qkq46\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.553245 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwps8\" (UniqueName: \"kubernetes.io/projected/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-kube-api-access-zwps8\") pod \"nova-cell1-db-create-mj64h\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.554453 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5506c57e-14c0-4fca-88ba-09db2ac80047-operator-scripts\") pod \"nova-cell0-db-create-qkq46\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.554509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-operator-scripts\") pod \"nova-api-17d0-account-create-update-gv99f\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.581446 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9cfa-account-create-update-w9qh7"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.592481 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjx88\" (UniqueName: \"kubernetes.io/projected/5506c57e-14c0-4fca-88ba-09db2ac80047-kube-api-access-gjx88\") pod \"nova-cell0-db-create-qkq46\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.600536 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9sn7\" (UniqueName: \"kubernetes.io/projected/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-kube-api-access-b9sn7\") pod \"nova-api-17d0-account-create-update-gv99f\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.651848 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.661544 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbgg5\" (UniqueName: \"kubernetes.io/projected/d3628d61-3ed3-4dc8-b649-9748be42d073-kube-api-access-vbgg5\") pod \"nova-cell0-9cfa-account-create-update-w9qh7\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.661825 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-operator-scripts\") pod \"nova-cell1-db-create-mj64h\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.661978 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3628d61-3ed3-4dc8-b649-9748be42d073-operator-scripts\") pod \"nova-cell0-9cfa-account-create-update-w9qh7\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.662068 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwps8\" (UniqueName: \"kubernetes.io/projected/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-kube-api-access-zwps8\") pod \"nova-cell1-db-create-mj64h\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.663453 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-operator-scripts\") pod \"nova-cell1-db-create-mj64h\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.669813 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.746763 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwps8\" (UniqueName: \"kubernetes.io/projected/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-kube-api-access-zwps8\") pod \"nova-cell1-db-create-mj64h\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.777101 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbgg5\" (UniqueName: \"kubernetes.io/projected/d3628d61-3ed3-4dc8-b649-9748be42d073-kube-api-access-vbgg5\") pod \"nova-cell0-9cfa-account-create-update-w9qh7\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.777686 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3628d61-3ed3-4dc8-b649-9748be42d073-operator-scripts\") pod \"nova-cell0-9cfa-account-create-update-w9qh7\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.778560 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3628d61-3ed3-4dc8-b649-9748be42d073-operator-scripts\") pod \"nova-cell0-9cfa-account-create-update-w9qh7\" (UID: 
\"d3628d61-3ed3-4dc8-b649-9748be42d073\") " pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.780382 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-024b-account-create-update-gknfh"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.784309 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.786702 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.790774 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-024b-account-create-update-gknfh"] Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.806577 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbgg5\" (UniqueName: \"kubernetes.io/projected/d3628d61-3ed3-4dc8-b649-9748be42d073-kube-api-access-vbgg5\") pod \"nova-cell0-9cfa-account-create-update-w9qh7\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.823459 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.859399 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.878493 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtcg\" (UniqueName: \"kubernetes.io/projected/5072950d-2ee4-439c-ade4-63802cc55a48-kube-api-access-dhtcg\") pod \"nova-cell1-024b-account-create-update-gknfh\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.878598 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072950d-2ee4-439c-ade4-63802cc55a48-operator-scripts\") pod \"nova-cell1-024b-account-create-update-gknfh\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.981367 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhtcg\" (UniqueName: \"kubernetes.io/projected/5072950d-2ee4-439c-ade4-63802cc55a48-kube-api-access-dhtcg\") pod \"nova-cell1-024b-account-create-update-gknfh\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.981532 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072950d-2ee4-439c-ade4-63802cc55a48-operator-scripts\") pod \"nova-cell1-024b-account-create-update-gknfh\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:11 crc kubenswrapper[4624]: I0228 03:56:11.983115 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/5072950d-2ee4-439c-ade4-63802cc55a48-operator-scripts\") pod \"nova-cell1-024b-account-create-update-gknfh\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:12 crc kubenswrapper[4624]: I0228 03:56:12.000779 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhtcg\" (UniqueName: \"kubernetes.io/projected/5072950d-2ee4-439c-ade4-63802cc55a48-kube-api-access-dhtcg\") pod \"nova-cell1-024b-account-create-update-gknfh\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:12 crc kubenswrapper[4624]: I0228 03:56:12.150807 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.110701 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.147785 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.207680 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rpn2\" (UniqueName: \"kubernetes.io/projected/c57d1ad8-41f4-45c4-8823-9b854dcf073e-kube-api-access-6rpn2\") pod \"c57d1ad8-41f4-45c4-8823-9b854dcf073e\" (UID: \"c57d1ad8-41f4-45c4-8823-9b854dcf073e\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.222429 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57d1ad8-41f4-45c4-8823-9b854dcf073e-kube-api-access-6rpn2" (OuterVolumeSpecName: "kube-api-access-6rpn2") pod "c57d1ad8-41f4-45c4-8823-9b854dcf073e" (UID: "c57d1ad8-41f4-45c4-8823-9b854dcf073e"). 
InnerVolumeSpecName "kube-api-access-6rpn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.312818 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-config\") pod \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.313037 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5thz\" (UniqueName: \"kubernetes.io/projected/e5f3e502-df83-4a0e-8240-53d8d6d78a80-kube-api-access-p5thz\") pod \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.313058 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-httpd-config\") pod \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.313208 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-combined-ca-bundle\") pod \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.313234 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-ovndb-tls-certs\") pod \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\" (UID: \"e5f3e502-df83-4a0e-8240-53d8d6d78a80\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.313638 4624 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6rpn2\" (UniqueName: \"kubernetes.io/projected/c57d1ad8-41f4-45c4-8823-9b854dcf073e-kube-api-access-6rpn2\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.329524 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f3e502-df83-4a0e-8240-53d8d6d78a80-kube-api-access-p5thz" (OuterVolumeSpecName: "kube-api-access-p5thz") pod "e5f3e502-df83-4a0e-8240-53d8d6d78a80" (UID: "e5f3e502-df83-4a0e-8240-53d8d6d78a80"). InnerVolumeSpecName "kube-api-access-p5thz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.365415 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e5f3e502-df83-4a0e-8240-53d8d6d78a80" (UID: "e5f3e502-df83-4a0e-8240-53d8d6d78a80"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.426732 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b75fb948d-dzc9p" event={"ID":"e5f3e502-df83-4a0e-8240-53d8d6d78a80","Type":"ContainerDied","Data":"9fefb793d8a707565be6a2404b1be2a72d324759c12bd748a4ad66d254dd6c43"} Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.426795 4624 scope.go:117] "RemoveContainer" containerID="0299974817a49a23813b6a4e0541439b33a4eb4cdaf556cd7a52e1dc71f6dd3e" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.426968 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b75fb948d-dzc9p" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.458380 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5thz\" (UniqueName: \"kubernetes.io/projected/e5f3e502-df83-4a0e-8240-53d8d6d78a80-kube-api-access-p5thz\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.458418 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.607530 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" event={"ID":"c57d1ad8-41f4-45c4-8823-9b854dcf073e","Type":"ContainerDied","Data":"cfe1eb2c5c8d948156d515d3e8ce2accd70123a3957c06bd50da76e41da3b22a"} Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.607905 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe1eb2c5c8d948156d515d3e8ce2accd70123a3957c06bd50da76e41da3b22a" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.607870 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537516-ggl8l" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.688850 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-config" (OuterVolumeSpecName: "config") pod "e5f3e502-df83-4a0e-8240-53d8d6d78a80" (UID: "e5f3e502-df83-4a0e-8240-53d8d6d78a80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.720002 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5f3e502-df83-4a0e-8240-53d8d6d78a80" (UID: "e5f3e502-df83-4a0e-8240-53d8d6d78a80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.732848 4624 scope.go:117] "RemoveContainer" containerID="f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.776334 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.779668 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.779710 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e5f3e502-df83-4a0e-8240-53d8d6d78a80" (UID: "e5f3e502-df83-4a0e-8240-53d8d6d78a80"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.798757 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.886495 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-combined-ca-bundle\") pod \"3388a3ea-259d-4648-a047-3f9c896f8264\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.887006 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data\") pod \"3388a3ea-259d-4648-a047-3f9c896f8264\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.887048 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3388a3ea-259d-4648-a047-3f9c896f8264-etc-machine-id\") pod \"3388a3ea-259d-4648-a047-3f9c896f8264\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.887080 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxw6c\" (UniqueName: \"kubernetes.io/projected/3388a3ea-259d-4648-a047-3f9c896f8264-kube-api-access-rxw6c\") pod \"3388a3ea-259d-4648-a047-3f9c896f8264\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.887161 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-scripts\") pod \"3388a3ea-259d-4648-a047-3f9c896f8264\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.887259 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data-custom\") pod \"3388a3ea-259d-4648-a047-3f9c896f8264\" (UID: \"3388a3ea-259d-4648-a047-3f9c896f8264\") " Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.887811 4624 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f3e502-df83-4a0e-8240-53d8d6d78a80-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.890325 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3388a3ea-259d-4648-a047-3f9c896f8264-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3388a3ea-259d-4648-a047-3f9c896f8264" (UID: "3388a3ea-259d-4648-a047-3f9c896f8264"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.901181 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3388a3ea-259d-4648-a047-3f9c896f8264" (UID: "3388a3ea-259d-4648-a047-3f9c896f8264"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.901260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-scripts" (OuterVolumeSpecName: "scripts") pod "3388a3ea-259d-4648-a047-3f9c896f8264" (UID: "3388a3ea-259d-4648-a047-3f9c896f8264"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.903407 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3388a3ea-259d-4648-a047-3f9c896f8264-kube-api-access-rxw6c" (OuterVolumeSpecName: "kube-api-access-rxw6c") pod "3388a3ea-259d-4648-a047-3f9c896f8264" (UID: "3388a3ea-259d-4648-a047-3f9c896f8264"). InnerVolumeSpecName "kube-api-access-rxw6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.990500 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxw6c\" (UniqueName: \"kubernetes.io/projected/3388a3ea-259d-4648-a047-3f9c896f8264-kube-api-access-rxw6c\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.990527 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.990536 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.990545 4624 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3388a3ea-259d-4648-a047-3f9c896f8264-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:13 crc kubenswrapper[4624]: I0228 03:56:13.997979 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3388a3ea-259d-4648-a047-3f9c896f8264" (UID: "3388a3ea-259d-4648-a047-3f9c896f8264"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.045733 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.118322 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.145134 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b75fb948d-dzc9p"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.163341 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b75fb948d-dzc9p"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.220739 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-sg-core-conf-yaml\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.220787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-combined-ca-bundle\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.220904 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-run-httpd\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.220931 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-log-httpd\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.221028 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-scripts\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.221361 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-config-data\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.221436 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4szkb\" (UniqueName: \"kubernetes.io/projected/0825cb79-326c-4d00-84e7-f593eabeb7d7-kube-api-access-4szkb\") pod \"0825cb79-326c-4d00-84e7-f593eabeb7d7\" (UID: \"0825cb79-326c-4d00-84e7-f593eabeb7d7\") " Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.225952 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.226644 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.281027 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0825cb79-326c-4d00-84e7-f593eabeb7d7-kube-api-access-4szkb" (OuterVolumeSpecName: "kube-api-access-4szkb") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). InnerVolumeSpecName "kube-api-access-4szkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.290748 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-scripts" (OuterVolumeSpecName: "scripts") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.299436 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537510-46lp8"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.317696 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537510-46lp8"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.324263 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4szkb\" (UniqueName: \"kubernetes.io/projected/0825cb79-326c-4d00-84e7-f593eabeb7d7-kube-api-access-4szkb\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.324320 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.324334 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0825cb79-326c-4d00-84e7-f593eabeb7d7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.324347 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.359441 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data" (OuterVolumeSpecName: "config-data") pod "3388a3ea-259d-4648-a047-3f9c896f8264" (UID: "3388a3ea-259d-4648-a047-3f9c896f8264"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.431509 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3388a3ea-259d-4648-a047-3f9c896f8264-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.465353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.547508 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.570469 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.628495 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-024b-account-create-update-gknfh"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.662090 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.667831 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-17d0-account-create-update-gv99f"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.742308 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerStarted","Data":"940754d093426155a0bfd9f597844251410b1b7303859424203ebbb0061de2e3"} Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.779137 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0825cb79-326c-4d00-84e7-f593eabeb7d7","Type":"ContainerDied","Data":"c19921df6df8b0709e2adb556845dab009404866935c004bb04b222e05045769"} Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.787104 4624 scope.go:117] "RemoveContainer" containerID="21657615dd78cbbb232cdb6fe89f351ebdfe6bf9751dc4146822e188c39c42cf" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.785340 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.830795 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerStarted","Data":"7f2e0d50f88199a063c27e79239b0538235bbf6111c30dfd8d0c32d33e145b7c"} Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.845914 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-t8gp6"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.855998 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3388a3ea-259d-4648-a047-3f9c896f8264","Type":"ContainerDied","Data":"b91f7c15122c4ee25b3d1b8cdeba20448e8ad2ab19595702f1c2179e3e249786"} Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.856288 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.910045 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mj64h"] Feb 28 03:56:14 crc kubenswrapper[4624]: I0228 03:56:14.953703 4624 scope.go:117] "RemoveContainer" containerID="8389c7bb6043647b23edadeecdf12f91902fa803ffdf95250f30bfcc0b1e103b" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.013847 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.038302 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.042266 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-config-data" (OuterVolumeSpecName: "config-data") pod "0825cb79-326c-4d00-84e7-f593eabeb7d7" (UID: "0825cb79-326c-4d00-84e7-f593eabeb7d7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052389 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052792 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-httpd" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052809 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-httpd" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052825 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="proxy-httpd" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052833 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="proxy-httpd" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052846 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-central-agent" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052854 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-central-agent" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052869 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="probe" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052875 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="probe" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052886 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" 
containerName="sg-core" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052892 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="sg-core" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052905 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-api" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052912 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-api" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052921 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57d1ad8-41f4-45c4-8823-9b854dcf073e" containerName="oc" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052928 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57d1ad8-41f4-45c4-8823-9b854dcf073e" containerName="oc" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052954 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-notification-agent" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.052967 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-notification-agent" Feb 28 03:56:15 crc kubenswrapper[4624]: E0228 03:56:15.052993 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="cinder-scheduler" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053004 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="cinder-scheduler" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053191 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="proxy-httpd" Feb 28 
03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053204 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-central-agent" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053218 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="cinder-scheduler" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053227 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-httpd" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053240 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" containerName="probe" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053250 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57d1ad8-41f4-45c4-8823-9b854dcf073e" containerName="oc" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053262 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" containerName="neutron-api" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053270 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="ceilometer-notification-agent" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.053279 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" containerName="sg-core" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.054307 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.059466 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.080901 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.088502 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.797531844 podStartE2EDuration="26.088479962s" podCreationTimestamp="2026-02-28 03:55:49 +0000 UTC" firstStartedPulling="2026-02-28 03:55:51.117529278 +0000 UTC m=+1205.781568587" lastFinishedPulling="2026-02-28 03:56:13.408477396 +0000 UTC m=+1228.072516705" observedRunningTime="2026-02-28 03:56:14.961596108 +0000 UTC m=+1229.625635417" watchObservedRunningTime="2026-02-28 03:56:15.088479962 +0000 UTC m=+1229.752519271" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.114633 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9cfa-account-create-update-w9qh7"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.119145 4624 scope.go:117] "RemoveContainer" containerID="4b3220fdbdc85c417d12b1c844f17869c42ffa3848bff9294a1c23da780e0ce6" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121035 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d77d2859-7ba3-4a5a-b2e2-536e824afade-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121101 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121258 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwskk\" (UniqueName: \"kubernetes.io/projected/d77d2859-7ba3-4a5a-b2e2-536e824afade-kube-api-access-rwskk\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121293 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-config-data\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121320 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121378 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-scripts\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.121441 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825cb79-326c-4d00-84e7-f593eabeb7d7-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.128237 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qkq46"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.181938 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.199324 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.213090 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.219405 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.223861 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.224180 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-scripts\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.224353 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d77d2859-7ba3-4a5a-b2e2-536e824afade-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.224467 4624 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.224572 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwskk\" (UniqueName: \"kubernetes.io/projected/d77d2859-7ba3-4a5a-b2e2-536e824afade-kube-api-access-rwskk\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.224666 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-config-data\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.228861 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d77d2859-7ba3-4a5a-b2e2-536e824afade-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.237389 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-scripts\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.239995 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.238881 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.238985 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.237498 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.241009 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77d2859-7ba3-4a5a-b2e2-536e824afade-config-data\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.252214 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.281176 4624 scope.go:117] "RemoveContainer" containerID="9c14ff5bcc5023ffe160009d9bb9ef10feb87d90a735836b900371e53b386a1c" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.281693 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwskk\" (UniqueName: \"kubernetes.io/projected/d77d2859-7ba3-4a5a-b2e2-536e824afade-kube-api-access-rwskk\") pod \"cinder-scheduler-0\" (UID: \"d77d2859-7ba3-4a5a-b2e2-536e824afade\") " pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.339004 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.339100 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-log-httpd\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.351692 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-run-httpd\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.351740 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.351781 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-config-data\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.351947 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mq2\" (UniqueName: \"kubernetes.io/projected/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-kube-api-access-q7mq2\") pod 
\"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.352255 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-scripts\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.443382 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456596 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-run-httpd\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456652 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456683 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-config-data\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456710 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mq2\" (UniqueName: \"kubernetes.io/projected/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-kube-api-access-q7mq2\") pod \"ceilometer-0\" (UID: 
\"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456774 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-scripts\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456818 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.456853 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-log-httpd\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.464551 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-log-httpd\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.470378 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-run-httpd\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.483234 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.483597 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mq2\" (UniqueName: \"kubernetes.io/projected/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-kube-api-access-q7mq2\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.497374 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.500307 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-config-data\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.521993 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-scripts\") pod \"ceilometer-0\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.636464 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.658955 4624 scope.go:117] "RemoveContainer" containerID="fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.741362 4624 scope.go:117] "RemoveContainer" containerID="5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61" Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.957039 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-17d0-account-create-update-gv99f" event={"ID":"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc","Type":"ContainerStarted","Data":"6d33b8ba5be7135b759fe37fd36c46342563b111642d63c5436209096d5df6b0"} Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.957090 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-17d0-account-create-update-gv99f" event={"ID":"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc","Type":"ContainerStarted","Data":"9cab8bd987b3a9ceb80eaa7b7dbf228a6e6fd893f7d8ec67c172cfeb8cd00a31"} Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.974254 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-024b-account-create-update-gknfh" event={"ID":"5072950d-2ee4-439c-ade4-63802cc55a48","Type":"ContainerStarted","Data":"b2d7f35ed6731aad08f0ae5940ec1e8afed45565b8e3efedb47b1710b1804fea"} Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.974333 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-024b-account-create-update-gknfh" event={"ID":"5072950d-2ee4-439c-ade4-63802cc55a48","Type":"ContainerStarted","Data":"514494a928f1534c6b123496398ffd38054ef93c5bc6209b3d9f1dc6be431328"} Feb 28 03:56:15 crc kubenswrapper[4624]: I0228 03:56:15.984735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"fa3966ee-e42d-4dfe-a730-978481d7f497","Type":"ContainerStarted","Data":"dd801f0e7825d77082e867d666fcb006c9e31303b7d46d0dc20c300b5f982229"} Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:15.999848 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" event={"ID":"d3628d61-3ed3-4dc8-b649-9748be42d073","Type":"ContainerStarted","Data":"a3df1969b7472db690d77bf7b04efee0e5a955447edf3c5ede6988d58978c385"} Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.011988 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-024b-account-create-update-gknfh" podStartSLOduration=5.011968428 podStartE2EDuration="5.011968428s" podCreationTimestamp="2026-02-28 03:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:16.007710752 +0000 UTC m=+1230.671750061" watchObservedRunningTime="2026-02-28 03:56:16.011968428 +0000 UTC m=+1230.676007737" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.029608 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t8gp6" event={"ID":"59f46a0e-807d-4856-bb08-878dc3d19728","Type":"ContainerStarted","Data":"1ad8478efe605f5374eb075d61d8ff765c193c211b8865e5b3a20bf2eb2879b5"} Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.036542 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qkq46" event={"ID":"5506c57e-14c0-4fca-88ba-09db2ac80047","Type":"ContainerStarted","Data":"2b1492694bfc1366963f0e14d32f78a6fe3e7a08febb8fb2a83b5514a09f2b0a"} Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.040721 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-17d0-account-create-update-gv99f" podStartSLOduration=5.040704317 podStartE2EDuration="5.040704317s" podCreationTimestamp="2026-02-28 03:56:11 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:15.985969501 +0000 UTC m=+1230.650008810" watchObservedRunningTime="2026-02-28 03:56:16.040704317 +0000 UTC m=+1230.704743626" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.043371 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mj64h" event={"ID":"3d5443a5-da1f-4cf6-a6a1-bd562c45f257","Type":"ContainerStarted","Data":"0b71adf97be28e69395fce1328daedef8d8ad51900259bea3906b52be9e38035"} Feb 28 03:56:16 crc kubenswrapper[4624]: E0228 03:56:16.078916 4624 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/578473fb185920389e78793f3095ba20de319e8c1a4cc6095e0c1bc806ce6086/diff" to get inode usage: stat /var/lib/containers/storage/overlay/578473fb185920389e78793f3095ba20de319e8c1a4cc6095e0c1bc806ce6086/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_2bf41691-ef23-4f33-83f0-ebd9c2ca1d87/sg-core/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_2bf41691-ef23-4f33-83f0-ebd9c2ca1d87/sg-core/0.log: no such file or directory Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.080084 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-mj64h" podStartSLOduration=5.080032895 podStartE2EDuration="5.080032895s" podCreationTimestamp="2026-02-28 03:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:16.062595202 +0000 UTC m=+1230.726634511" watchObservedRunningTime="2026-02-28 03:56:16.080032895 +0000 UTC m=+1230.744072204" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.185468 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0825cb79-326c-4d00-84e7-f593eabeb7d7" path="/var/lib/kubelet/pods/0825cb79-326c-4d00-84e7-f593eabeb7d7/volumes" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.186593 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3388a3ea-259d-4648-a047-3f9c896f8264" path="/var/lib/kubelet/pods/3388a3ea-259d-4648-a047-3f9c896f8264/volumes" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.190015 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f3e502-df83-4a0e-8240-53d8d6d78a80" path="/var/lib/kubelet/pods/e5f3e502-df83-4a0e-8240-53d8d6d78a80/volumes" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.190625 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1" path="/var/lib/kubelet/pods/ea3f4dfa-eb68-4aa6-a3f8-d317d38982a1/volumes" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.191572 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.226804 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.172:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:56:16 crc kubenswrapper[4624]: I0228 03:56:16.467588 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.057651 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d77d2859-7ba3-4a5a-b2e2-536e824afade","Type":"ContainerStarted","Data":"ec5dc74b7bdec2214008e80d69a46aa8ca81eed6ac7074ad316adbe6163ca51b"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.060325 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerStarted","Data":"179a647314736bf89e5190e3738acaefa4fd24edf298d6f81fde333dbf6ab660"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.066357 4624 generic.go:334] "Generic (PLEG): container finished" podID="d3628d61-3ed3-4dc8-b649-9748be42d073" containerID="c310ac6a435228634ccfe630475de36378b07a86ec7bc61749739d1eb23b6d1e" exitCode=0 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.066457 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" event={"ID":"d3628d61-3ed3-4dc8-b649-9748be42d073","Type":"ContainerDied","Data":"c310ac6a435228634ccfe630475de36378b07a86ec7bc61749739d1eb23b6d1e"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.078300 4624 generic.go:334] "Generic (PLEG): container finished" podID="59f46a0e-807d-4856-bb08-878dc3d19728" containerID="4c9457d9820529b5517b6941d02d2457ed943e36538bfc1766662590702b56e5" exitCode=0 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.078451 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t8gp6" event={"ID":"59f46a0e-807d-4856-bb08-878dc3d19728","Type":"ContainerDied","Data":"4c9457d9820529b5517b6941d02d2457ed943e36538bfc1766662590702b56e5"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.085246 4624 generic.go:334] "Generic (PLEG): container finished" podID="5506c57e-14c0-4fca-88ba-09db2ac80047" containerID="c56e444347195ecf2666edcb080aa8735b0460ea2cfb76672fb393b0ae323d0e" exitCode=0 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.085347 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qkq46" event={"ID":"5506c57e-14c0-4fca-88ba-09db2ac80047","Type":"ContainerDied","Data":"c56e444347195ecf2666edcb080aa8735b0460ea2cfb76672fb393b0ae323d0e"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.093831 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="3d5443a5-da1f-4cf6-a6a1-bd562c45f257" containerID="b0bd73a49e8ae6a04c11d686b555c198249589bd0fcb298a68c9e8d35acae9ea" exitCode=0 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.093924 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mj64h" event={"ID":"3d5443a5-da1f-4cf6-a6a1-bd562c45f257","Type":"ContainerDied","Data":"b0bd73a49e8ae6a04c11d686b555c198249589bd0fcb298a68c9e8d35acae9ea"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.106119 4624 generic.go:334] "Generic (PLEG): container finished" podID="5072950d-2ee4-439c-ade4-63802cc55a48" containerID="b2d7f35ed6731aad08f0ae5940ec1e8afed45565b8e3efedb47b1710b1804fea" exitCode=0 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.106280 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-024b-account-create-update-gknfh" event={"ID":"5072950d-2ee4-439c-ade4-63802cc55a48","Type":"ContainerDied","Data":"b2d7f35ed6731aad08f0ae5940ec1e8afed45565b8e3efedb47b1710b1804fea"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.127594 4624 generic.go:334] "Generic (PLEG): container finished" podID="1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" containerID="6d33b8ba5be7135b759fe37fd36c46342563b111642d63c5436209096d5df6b0" exitCode=0 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.127661 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-17d0-account-create-update-gv99f" event={"ID":"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc","Type":"ContainerDied","Data":"6d33b8ba5be7135b759fe37fd36c46342563b111642d63c5436209096d5df6b0"} Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.217196 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.219555 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-86b4894974-wxqfg" Feb 28 03:56:17 crc 
kubenswrapper[4624]: I0228 03:56:17.411779 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-96545fdc6-xmzr4"] Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.412187 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-96545fdc6-xmzr4" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-log" containerID="cri-o://24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1" gracePeriod=30 Feb 28 03:56:17 crc kubenswrapper[4624]: I0228 03:56:17.412756 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-96545fdc6-xmzr4" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-api" containerID="cri-o://7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0" gracePeriod=30 Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.174653 4624 generic.go:334] "Generic (PLEG): container finished" podID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerID="24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1" exitCode=143 Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.175046 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96545fdc6-xmzr4" event={"ID":"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc","Type":"ContainerDied","Data":"24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1"} Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.186249 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d77d2859-7ba3-4a5a-b2e2-536e824afade","Type":"ContainerStarted","Data":"9db9e61c85105698bb273f1e26ded248a9d2ae8a743d508013e9e23b94bc024f"} Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.195500 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerStarted","Data":"3c98666b447fb8ae6db51e9d3e116914b4b99b4868ec4f009afd3e8ff7419b85"} Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.723749 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.897403 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9sn7\" (UniqueName: \"kubernetes.io/projected/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-kube-api-access-b9sn7\") pod \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.897696 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-operator-scripts\") pod \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\" (UID: \"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc\") " Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.899284 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" (UID: "1ee730e9-677d-4ae6-b242-dbdaee2e0ecc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:18 crc kubenswrapper[4624]: I0228 03:56:18.906435 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-kube-api-access-b9sn7" (OuterVolumeSpecName: "kube-api-access-b9sn7") pod "1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" (UID: "1ee730e9-677d-4ae6-b242-dbdaee2e0ecc"). InnerVolumeSpecName "kube-api-access-b9sn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.002640 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9sn7\" (UniqueName: \"kubernetes.io/projected/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-kube-api-access-b9sn7\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.003213 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.269488 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t8gp6" event={"ID":"59f46a0e-807d-4856-bb08-878dc3d19728","Type":"ContainerDied","Data":"1ad8478efe605f5374eb075d61d8ff765c193c211b8865e5b3a20bf2eb2879b5"} Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.269547 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad8478efe605f5374eb075d61d8ff765c193c211b8865e5b3a20bf2eb2879b5" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.269708 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.294563 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qkq46" event={"ID":"5506c57e-14c0-4fca-88ba-09db2ac80047","Type":"ContainerDied","Data":"2b1492694bfc1366963f0e14d32f78a6fe3e7a08febb8fb2a83b5514a09f2b0a"} Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.294619 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1492694bfc1366963f0e14d32f78a6fe3e7a08febb8fb2a83b5514a09f2b0a" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.331978 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-17d0-account-create-update-gv99f" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.332009 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-17d0-account-create-update-gv99f" event={"ID":"1ee730e9-677d-4ae6-b242-dbdaee2e0ecc","Type":"ContainerDied","Data":"9cab8bd987b3a9ceb80eaa7b7dbf228a6e6fd893f7d8ec67c172cfeb8cd00a31"} Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.332057 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cab8bd987b3a9ceb80eaa7b7dbf228a6e6fd893f7d8ec67c172cfeb8cd00a31" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.362875 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d77d2859-7ba3-4a5a-b2e2-536e824afade","Type":"ContainerStarted","Data":"7347b3bfcd84b1bef8f8495ee726ee571175db3ce1cfb628649f1185b689ad04"} Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.364333 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.374608 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.382142 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerStarted","Data":"09a66272fa4f939e3f477740227abb2056712264bd4dd02686aa49af75c1405e"} Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.418566 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.428737 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.428715171 podStartE2EDuration="5.428715171s" podCreationTimestamp="2026-02-28 03:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:19.409214822 +0000 UTC m=+1234.073254161" watchObservedRunningTime="2026-02-28 03:56:19.428715171 +0000 UTC m=+1234.092754480" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.434167 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5506c57e-14c0-4fca-88ba-09db2ac80047-operator-scripts\") pod \"5506c57e-14c0-4fca-88ba-09db2ac80047\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.434255 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjx88\" (UniqueName: \"kubernetes.io/projected/5506c57e-14c0-4fca-88ba-09db2ac80047-kube-api-access-gjx88\") pod \"5506c57e-14c0-4fca-88ba-09db2ac80047\" (UID: \"5506c57e-14c0-4fca-88ba-09db2ac80047\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.438006 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5506c57e-14c0-4fca-88ba-09db2ac80047-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5506c57e-14c0-4fca-88ba-09db2ac80047" (UID: "5506c57e-14c0-4fca-88ba-09db2ac80047"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.447541 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5506c57e-14c0-4fca-88ba-09db2ac80047-kube-api-access-gjx88" (OuterVolumeSpecName: "kube-api-access-gjx88") pod "5506c57e-14c0-4fca-88ba-09db2ac80047" (UID: "5506c57e-14c0-4fca-88ba-09db2ac80047"). InnerVolumeSpecName "kube-api-access-gjx88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.520998 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.536293 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbgg5\" (UniqueName: \"kubernetes.io/projected/d3628d61-3ed3-4dc8-b649-9748be42d073-kube-api-access-vbgg5\") pod \"d3628d61-3ed3-4dc8-b649-9748be42d073\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.536637 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3628d61-3ed3-4dc8-b649-9748be42d073-operator-scripts\") pod \"d3628d61-3ed3-4dc8-b649-9748be42d073\" (UID: \"d3628d61-3ed3-4dc8-b649-9748be42d073\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.536666 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79jz2\" (UniqueName: \"kubernetes.io/projected/59f46a0e-807d-4856-bb08-878dc3d19728-kube-api-access-79jz2\") pod \"59f46a0e-807d-4856-bb08-878dc3d19728\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.536793 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/59f46a0e-807d-4856-bb08-878dc3d19728-operator-scripts\") pod \"59f46a0e-807d-4856-bb08-878dc3d19728\" (UID: \"59f46a0e-807d-4856-bb08-878dc3d19728\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.536892 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwps8\" (UniqueName: \"kubernetes.io/projected/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-kube-api-access-zwps8\") pod \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.536913 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-operator-scripts\") pod \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\" (UID: \"3d5443a5-da1f-4cf6-a6a1-bd562c45f257\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.537487 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5506c57e-14c0-4fca-88ba-09db2ac80047-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.537500 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjx88\" (UniqueName: \"kubernetes.io/projected/5506c57e-14c0-4fca-88ba-09db2ac80047-kube-api-access-gjx88\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.552998 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d5443a5-da1f-4cf6-a6a1-bd562c45f257" (UID: "3d5443a5-da1f-4cf6-a6a1-bd562c45f257"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.559463 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f46a0e-807d-4856-bb08-878dc3d19728-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59f46a0e-807d-4856-bb08-878dc3d19728" (UID: "59f46a0e-807d-4856-bb08-878dc3d19728"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.560756 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3628d61-3ed3-4dc8-b649-9748be42d073-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3628d61-3ed3-4dc8-b649-9748be42d073" (UID: "d3628d61-3ed3-4dc8-b649-9748be42d073"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.563270 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3628d61-3ed3-4dc8-b649-9748be42d073-kube-api-access-vbgg5" (OuterVolumeSpecName: "kube-api-access-vbgg5") pod "d3628d61-3ed3-4dc8-b649-9748be42d073" (UID: "d3628d61-3ed3-4dc8-b649-9748be42d073"). InnerVolumeSpecName "kube-api-access-vbgg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.592539 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-kube-api-access-zwps8" (OuterVolumeSpecName: "kube-api-access-zwps8") pod "3d5443a5-da1f-4cf6-a6a1-bd562c45f257" (UID: "3d5443a5-da1f-4cf6-a6a1-bd562c45f257"). InnerVolumeSpecName "kube-api-access-zwps8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.592968 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f46a0e-807d-4856-bb08-878dc3d19728-kube-api-access-79jz2" (OuterVolumeSpecName: "kube-api-access-79jz2") pod "59f46a0e-807d-4856-bb08-878dc3d19728" (UID: "59f46a0e-807d-4856-bb08-878dc3d19728"). InnerVolumeSpecName "kube-api-access-79jz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.639488 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhtcg\" (UniqueName: \"kubernetes.io/projected/5072950d-2ee4-439c-ade4-63802cc55a48-kube-api-access-dhtcg\") pod \"5072950d-2ee4-439c-ade4-63802cc55a48\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.639813 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072950d-2ee4-439c-ade4-63802cc55a48-operator-scripts\") pod \"5072950d-2ee4-439c-ade4-63802cc55a48\" (UID: \"5072950d-2ee4-439c-ade4-63802cc55a48\") " Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640333 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f46a0e-807d-4856-bb08-878dc3d19728-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640353 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwps8\" (UniqueName: \"kubernetes.io/projected/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-kube-api-access-zwps8\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640367 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d5443a5-da1f-4cf6-a6a1-bd562c45f257-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640380 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbgg5\" (UniqueName: \"kubernetes.io/projected/d3628d61-3ed3-4dc8-b649-9748be42d073-kube-api-access-vbgg5\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640390 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3628d61-3ed3-4dc8-b649-9748be42d073-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640401 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79jz2\" (UniqueName: \"kubernetes.io/projected/59f46a0e-807d-4856-bb08-878dc3d19728-kube-api-access-79jz2\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.640805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5072950d-2ee4-439c-ade4-63802cc55a48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5072950d-2ee4-439c-ade4-63802cc55a48" (UID: "5072950d-2ee4-439c-ade4-63802cc55a48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.648792 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5072950d-2ee4-439c-ade4-63802cc55a48-kube-api-access-dhtcg" (OuterVolumeSpecName: "kube-api-access-dhtcg") pod "5072950d-2ee4-439c-ade4-63802cc55a48" (UID: "5072950d-2ee4-439c-ade4-63802cc55a48"). InnerVolumeSpecName "kube-api-access-dhtcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.741902 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhtcg\" (UniqueName: \"kubernetes.io/projected/5072950d-2ee4-439c-ade4-63802cc55a48-kube-api-access-dhtcg\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:19 crc kubenswrapper[4624]: I0228 03:56:19.741948 4624 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5072950d-2ee4-439c-ade4-63802cc55a48-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.445517 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.474024 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-024b-account-create-update-gknfh" event={"ID":"5072950d-2ee4-439c-ade4-63802cc55a48","Type":"ContainerDied","Data":"514494a928f1534c6b123496398ffd38054ef93c5bc6209b3d9f1dc6be431328"} Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.474086 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="514494a928f1534c6b123496398ffd38054ef93c5bc6209b3d9f1dc6be431328" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.474200 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-024b-account-create-update-gknfh" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.482466 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerStarted","Data":"a29da295fed42c4d1ff38b3c25cc0f3795b5fbc331a20617bbaf14e0cb139c41"} Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.484514 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" event={"ID":"d3628d61-3ed3-4dc8-b649-9748be42d073","Type":"ContainerDied","Data":"a3df1969b7472db690d77bf7b04efee0e5a955447edf3c5ede6988d58978c385"} Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.484543 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3df1969b7472db690d77bf7b04efee0e5a955447edf3c5ede6988d58978c385" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.484611 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9cfa-account-create-update-w9qh7" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.488533 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-t8gp6" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.490806 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mj64h" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.496220 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mj64h" event={"ID":"3d5443a5-da1f-4cf6-a6a1-bd562c45f257","Type":"ContainerDied","Data":"0b71adf97be28e69395fce1328daedef8d8ad51900259bea3906b52be9e38035"} Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.496281 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b71adf97be28e69395fce1328daedef8d8ad51900259bea3906b52be9e38035" Feb 28 03:56:20 crc kubenswrapper[4624]: I0228 03:56:20.496633 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkq46" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.154681 4624 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/39e4db26f262b0de8401bdac2b9b23023527f2c7a98bcbd0c52c29f89c987595/diff" to get inode usage: stat /var/lib/containers/storage/overlay/39e4db26f262b0de8401bdac2b9b23023527f2c7a98bcbd0c52c29f89c987595/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_barbican-api-5b6cbb6d88-bsx2h_ae632f24-f74a-413a-9835-599c21020eb5/barbican-api/0.log" to get inode usage: stat /var/log/pods/openstack_barbican-api-5b6cbb6d88-bsx2h_ae632f24-f74a-413a-9835-599c21020eb5/barbican-api/0.log: no such file or directory Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.401134 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.475161 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-combined-ca-bundle\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.475358 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhhvl\" (UniqueName: \"kubernetes.io/projected/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-kube-api-access-fhhvl\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.476383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-internal-tls-certs\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.476467 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-config-data\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.476556 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-scripts\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.476581 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-logs\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.476697 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-public-tls-certs\") pod \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\" (UID: \"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc\") " Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.482235 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-logs" (OuterVolumeSpecName: "logs") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.502249 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-scripts" (OuterVolumeSpecName: "scripts") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.503382 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-kube-api-access-fhhvl" (OuterVolumeSpecName: "kube-api-access-fhhvl") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "kube-api-access-fhhvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.551135 4624 generic.go:334] "Generic (PLEG): container finished" podID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerID="7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0" exitCode=0 Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.552405 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-96545fdc6-xmzr4" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.556004 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96545fdc6-xmzr4" event={"ID":"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc","Type":"ContainerDied","Data":"7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0"} Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.556044 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96545fdc6-xmzr4" event={"ID":"060f6b4f-b7fd-4bf4-836f-98f6e10d26bc","Type":"ContainerDied","Data":"30e10c2767bb96c30db9517e6e114dd489191635f32feff0ac7267f0000c9221"} Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.556061 4624 scope.go:117] "RemoveContainer" containerID="7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.584325 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.584361 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.584371 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhhvl\" (UniqueName: 
\"kubernetes.io/projected/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-kube-api-access-fhhvl\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.596640 4624 scope.go:117] "RemoveContainer" containerID="24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.622174 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.673337 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-config-data" (OuterVolumeSpecName: "config-data") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.687650 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.687703 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.775381 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.804793 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.831611 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" (UID: "060f6b4f-b7fd-4bf4-836f-98f6e10d26bc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.893961 4624 scope.go:117] "RemoveContainer" containerID="7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.904347 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0\": container with ID starting with 7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0 not found: ID does not exist" containerID="7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.909898 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.904415 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0"} err="failed to get container status \"7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0\": rpc error: code = NotFound desc = could not find container \"7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0\": container with ID starting with 7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0 not found: ID does not exist" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.912875 4624 scope.go:117] "RemoveContainer" containerID="24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.922275 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1\": container with ID starting with 24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1 not found: ID does not exist" containerID="24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.922362 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1"} err="failed to get container status \"24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1\": rpc error: code = NotFound desc = could not find container \"24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1\": container with ID starting with 24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1 not found: ID does not exist" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951381 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fr975"] Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.951844 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5072950d-2ee4-439c-ade4-63802cc55a48" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951862 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5072950d-2ee4-439c-ade4-63802cc55a48" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.951888 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951896 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.951923 4624 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5506c57e-14c0-4fca-88ba-09db2ac80047" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951930 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="5506c57e-14c0-4fca-88ba-09db2ac80047" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.951945 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f46a0e-807d-4856-bb08-878dc3d19728" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951952 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f46a0e-807d-4856-bb08-878dc3d19728" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.951968 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3628d61-3ed3-4dc8-b649-9748be42d073" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951976 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3628d61-3ed3-4dc8-b649-9748be42d073" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.951989 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-api" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.951996 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-api" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 03:56:21.952015 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5443a5-da1f-4cf6-a6a1-bd562c45f257" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952022 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5443a5-da1f-4cf6-a6a1-bd562c45f257" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: E0228 
03:56:21.952031 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-log" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952040 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-log" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952239 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5506c57e-14c0-4fca-88ba-09db2ac80047" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952258 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5443a5-da1f-4cf6-a6a1-bd562c45f257" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952267 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="5072950d-2ee4-439c-ade4-63802cc55a48" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952288 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-api" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952297 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3628d61-3ed3-4dc8-b649-9748be42d073" containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952305 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f46a0e-807d-4856-bb08-878dc3d19728" containerName="mariadb-database-create" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952313 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" containerName="placement-log" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.952323 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" 
containerName="mariadb-account-create-update" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.953071 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.959214 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nwwpd" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.959631 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.959795 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.969942 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fr975"] Feb 28 03:56:21 crc kubenswrapper[4624]: I0228 03:56:21.981178 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-96545fdc6-xmzr4"] Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.013840 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-96545fdc6-xmzr4"] Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.104986 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="060f6b4f-b7fd-4bf4-836f-98f6e10d26bc" path="/var/lib/kubelet/pods/060f6b4f-b7fd-4bf4-836f-98f6e10d26bc/volumes" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.117554 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.117610 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-scripts\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.117880 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlqs\" (UniqueName: \"kubernetes.io/projected/46302b23-1f0a-4e63-948a-bcc402ca3dc1-kube-api-access-pnlqs\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.118253 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-config-data\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.220767 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.220849 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-scripts\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc 
kubenswrapper[4624]: I0228 03:56:22.220890 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlqs\" (UniqueName: \"kubernetes.io/projected/46302b23-1f0a-4e63-948a-bcc402ca3dc1-kube-api-access-pnlqs\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.220964 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-config-data\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.227150 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-scripts\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.229803 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-config-data\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.232775 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 
03:56:22.246919 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlqs\" (UniqueName: \"kubernetes.io/projected/46302b23-1f0a-4e63-948a-bcc402ca3dc1-kube-api-access-pnlqs\") pod \"nova-cell0-conductor-db-sync-fr975\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.284690 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.579720 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerStarted","Data":"58e5fdacef8545aa12cf41a1023f75d5d84750fb74cd54820aa1a465a9ea6471"} Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.581372 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.626361 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5771525669999997 podStartE2EDuration="7.62614176s" podCreationTimestamp="2026-02-28 03:56:15 +0000 UTC" firstStartedPulling="2026-02-28 03:56:16.508895841 +0000 UTC m=+1231.172935150" lastFinishedPulling="2026-02-28 03:56:21.557885034 +0000 UTC m=+1236.221924343" observedRunningTime="2026-02-28 03:56:22.620381134 +0000 UTC m=+1237.284420453" watchObservedRunningTime="2026-02-28 03:56:22.62614176 +0000 UTC m=+1237.290181069" Feb 28 03:56:22 crc kubenswrapper[4624]: I0228 03:56:22.743918 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fr975"] Feb 28 03:56:23 crc kubenswrapper[4624]: I0228 03:56:23.593308 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fr975" 
event={"ID":"46302b23-1f0a-4e63-948a-bcc402ca3dc1","Type":"ContainerStarted","Data":"988048281e39709bddf93932e08846135889ff9074b80aa3814e4b217608b461"} Feb 28 03:56:23 crc kubenswrapper[4624]: E0228 03:56:23.883598 4624 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae632f24_f74a_413a_9835_599c21020eb5.slice/crio-7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3: Error finding container 7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3: Status 404 returned error can't find the container with id 7b150c6bfdf45aea432b09a216747166902183cd53d440d59a72682d41cdb9c3 Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.890568 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-conmon-382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-conmon-382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.890766 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.891078 4624 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-conmon-5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-conmon-5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.891116 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-5bc927e6ae6e8c245a33929819cf751356b763c6f96d410a15ebdbdf25d6dd61.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.891135 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-conmon-e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-conmon-e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.891162 4624 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a8f10e_9109_4a57_b870_9f337557365d.slice/crio-e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.891178 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-conmon-fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-conmon-fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.891195 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-fd31ad766fe41c3d85141ab674841cd93a3c2f0db422bc5799eb5bc8fd56ba2a.scope: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.902370 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0825cb79_326c_4d00_84e7_f593eabeb7d7.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0825cb79_326c_4d00_84e7_f593eabeb7d7.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.909273 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc57d1ad8_41f4_45c4_8823_9b854dcf073e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc57d1ad8_41f4_45c4_8823_9b854dcf073e.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.920256 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f46a0e_807d_4856_bb08_878dc3d19728.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f46a0e_807d_4856_bb08_878dc3d19728.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.921387 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5506c57e_14c0_4fca_88ba_09db2ac80047.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5506c57e_14c0_4fca_88ba_09db2ac80047.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.921498 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ee730e9_677d_4ae6_b242_dbdaee2e0ecc.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ee730e9_677d_4ae6_b242_dbdaee2e0ecc.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.921592 4624 watcher.go:93] Error 
while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d5443a5_da1f_4cf6_a6a1_bd562c45f257.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d5443a5_da1f_4cf6_a6a1_bd562c45f257.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.921824 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3628d61_3ed3_4dc8_b649_9748be42d073.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3628d61_3ed3_4dc8_b649_9748be42d073.slice: no such file or directory Feb 28 03:56:23 crc kubenswrapper[4624]: W0228 03:56:23.921955 4624 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5072950d_2ee4_439c_ade4_63802cc55a48.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5072950d_2ee4_439c_ade4_63802cc55a48.slice: no such file or directory Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.145976 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.148699 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.169615 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:56:24 crc 
kubenswrapper[4624]: E0228 03:56:24.320201 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ccc2a9a_c3cc_4ddb_a700_86713957337e.slice/crio-5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod030a5f79_331e_4d94_98e9_67ebca169648.slice/crio-46d628c29cf79d07f54bf2a8ddf75b40dc6fda0600f62934492f458ccc08a464\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod030a5f79_331e_4d94_98e9_67ebca169648.slice/crio-conmon-af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32836bdc_f650_4a3a_b1f9_21de1a2992e3.slice/crio-b7ba5564aa1c99b564d1863d34fdec3dc70b4d1b2faad3c8cd4c226050a2a62f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod030a5f79_331e_4d94_98e9_67ebca169648.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f3e502_df83_4a0e_8240_53d8d6d78a80.slice/crio-f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060f6b4f_b7fd_4bf4_836f_98f6e10d26bc.slice/crio-conmon-24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060f6b4f_b7fd_4bf4_836f_98f6e10d26bc.slice/crio-7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f3e502_df83_4a0e_8240_53d8d6d78a80.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ccc2a9a_c3cc_4ddb_a700_86713957337e.slice/crio-conmon-5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf41691_ef23_4f33_83f0_ebd9c2ca1d87.slice/crio-conmon-c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe61ab6a_7cb4_40a0_9658_0c58aaeba834.slice/crio-ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32836bdc_f650_4a3a_b1f9_21de1a2992e3.slice/crio-conmon-67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3388a3ea_259d_4648_a047_3f9c896f8264.slice/crio-b91f7c15122c4ee25b3d1b8cdeba20448e8ad2ab19595702f1c2179e3e249786\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42c908a_802d_416b_a7de_066df6e008bd.slice/crio-8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf41691_ef23_4f33_83f0_ebd9c2ca1d87.slice/crio-c17229b9e5e420783d6899bd89ab2a57b3f806674e69b22f9981ef5a36fc496a.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060f6b4f_b7fd_4bf4_836f_98f6e10d26bc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf41691_ef23_4f33_83f0_ebd9c2ca1d87.slice/crio-95d452a624edafd2bd122f41fd5aa727cd6e26c18fe907864c317e8c55a96864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1103dd_2624_40c7_9cc4_cf55c51633a2.slice/crio-conmon-0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf41691_ef23_4f33_83f0_ebd9c2ca1d87.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060f6b4f_b7fd_4bf4_836f_98f6e10d26bc.slice/crio-30e10c2767bb96c30db9517e6e114dd489191635f32feff0ac7267f0000c9221\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060f6b4f_b7fd_4bf4_836f_98f6e10d26bc.slice/crio-conmon-7b4e15c94ad80b60dcb3a6e2bcd5679ba4c19fde8610f1f2dfabe2a30cc915a0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42c908a_802d_416b_a7de_066df6e008bd.slice/crio-9c93ce3ca8046cf05b88d534c322db9695835608ff666e0e07af9539a8919bc9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060f6b4f_b7fd_4bf4_836f_98f6e10d26bc.slice/crio-24eb2752a6da6d84f60cb202a2e0ae0b0a604884597f96b0da55f04a2ac5e1d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae632f24_f74a_413a_9835_599c21020eb5.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe61ab6a_7cb4_40a0_9658_0c58aaeba834.slice/crio-conmon-ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf41691_ef23_4f33_83f0_ebd9c2ca1d87.slice/crio-42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42c908a_802d_416b_a7de_066df6e008bd.slice/crio-conmon-8ef5eb7e3fd894a6bad65dd605b989e6f0cba630e5c66c69dfd00244fd28c808.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf41691_ef23_4f33_83f0_ebd9c2ca1d87.slice/crio-conmon-42b9a9e60de362662811c6025c6c813f925520f45e15df5a17a27e597c1a74ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f3e502_df83_4a0e_8240_53d8d6d78a80.slice/crio-9fefb793d8a707565be6a2404b1be2a72d324759c12bd748a4ad66d254dd6c43\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f3e502_df83_4a0e_8240_53d8d6d78a80.slice/crio-conmon-0299974817a49a23813b6a4e0541439b33a4eb4cdaf556cd7a52e1dc71f6dd3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f3e502_df83_4a0e_8240_53d8d6d78a80.slice/crio-conmon-f020a26c6fee3aaf9a5700577cbbbe4db94571508ca367639f806ab858772072.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32836bdc_f650_4a3a_b1f9_21de1a2992e3.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod030a5f79_331e_4d94_98e9_67ebca169648.slice/crio-af0a50f2c6d769d29fe275cf94cc79c73a6b5f8347e3680dce64a32b67626c75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1103dd_2624_40c7_9cc4_cf55c51633a2.slice/crio-0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32836bdc_f650_4a3a_b1f9_21de1a2992e3.slice/crio-67283efdbdf072845938596cbba658107250027e585af20db99c9e4b02c05204.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42c908a_802d_416b_a7de_066df6e008bd.slice\": RecentStats: unable to find data in memory cache]" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.322495 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.322838 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.327266 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.382065 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.471970 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data-custom\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.472197 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-combined-ca-bundle\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.472246 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-scripts\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.472284 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfwzr\" (UniqueName: \"kubernetes.io/projected/63a8f10e-9109-4a57-b870-9f337557365d-kube-api-access-cfwzr\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.472354 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.472411 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/63a8f10e-9109-4a57-b870-9f337557365d-etc-machine-id\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.472434 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a8f10e-9109-4a57-b870-9f337557365d-logs\") pod \"63a8f10e-9109-4a57-b870-9f337557365d\" (UID: \"63a8f10e-9109-4a57-b870-9f337557365d\") " Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.479600 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63a8f10e-9109-4a57-b870-9f337557365d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.488312 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a8f10e-9109-4a57-b870-9f337557365d-logs" (OuterVolumeSpecName: "logs") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.494037 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.496430 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a8f10e-9109-4a57-b870-9f337557365d-kube-api-access-cfwzr" (OuterVolumeSpecName: "kube-api-access-cfwzr") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "kube-api-access-cfwzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.510344 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-scripts" (OuterVolumeSpecName: "scripts") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.575180 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.575211 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfwzr\" (UniqueName: \"kubernetes.io/projected/63a8f10e-9109-4a57-b870-9f337557365d-kube-api-access-cfwzr\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.575225 4624 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63a8f10e-9109-4a57-b870-9f337557365d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.575234 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a8f10e-9109-4a57-b870-9f337557365d-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc 
kubenswrapper[4624]: I0228 03:56:24.575243 4624 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.576447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data" (OuterVolumeSpecName: "config-data") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.606359 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a8f10e-9109-4a57-b870-9f337557365d" (UID: "63a8f10e-9109-4a57-b870-9f337557365d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.622159 4624 generic.go:334] "Generic (PLEG): container finished" podID="63a8f10e-9109-4a57-b870-9f337557365d" containerID="e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985" exitCode=137 Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.622327 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.623861 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63a8f10e-9109-4a57-b870-9f337557365d","Type":"ContainerDied","Data":"e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985"} Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.623908 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"63a8f10e-9109-4a57-b870-9f337557365d","Type":"ContainerDied","Data":"09e4587a3582dbfe042c079ac49ca5b52c46ecc5fc6edf58567365b47bfb0d21"} Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.623930 4624 scope.go:117] "RemoveContainer" containerID="e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.680863 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.680896 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a8f10e-9109-4a57-b870-9f337557365d-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.701651 4624 scope.go:117] "RemoveContainer" containerID="382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.705219 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.743631 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.754739 4624 scope.go:117] "RemoveContainer" 
containerID="e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985" Feb 28 03:56:24 crc kubenswrapper[4624]: E0228 03:56:24.756805 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985\": container with ID starting with e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985 not found: ID does not exist" containerID="e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.756864 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985"} err="failed to get container status \"e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985\": rpc error: code = NotFound desc = could not find container \"e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985\": container with ID starting with e2e6ba0d8d9e2027478116d0dfc2021aa10b2dbe44019f59af9e3af8bc807985 not found: ID does not exist" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.756896 4624 scope.go:117] "RemoveContainer" containerID="382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd" Feb 28 03:56:24 crc kubenswrapper[4624]: E0228 03:56:24.757726 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd\": container with ID starting with 382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd not found: ID does not exist" containerID="382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.757766 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd"} err="failed to get container status \"382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd\": rpc error: code = NotFound desc = could not find container \"382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd\": container with ID starting with 382a54db82e2aa90d07140d1e9f46594838ccd24da403421702125ad963614fd not found: ID does not exist" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.772968 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:56:24 crc kubenswrapper[4624]: E0228 03:56:24.773454 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api-log" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.773473 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api-log" Feb 28 03:56:24 crc kubenswrapper[4624]: E0228 03:56:24.773491 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.773498 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.773698 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api-log" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.773727 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a8f10e-9109-4a57-b870-9f337557365d" containerName="cinder-api" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.774831 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.785665 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.785679 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.786076 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.793014 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890156 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5wlp\" (UniqueName: \"kubernetes.io/projected/1343dbdf-afca-44d9-a8b3-828c71fe25a1-kube-api-access-j5wlp\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890206 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890246 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1343dbdf-afca-44d9-a8b3-828c71fe25a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890262 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890347 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890368 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-config-data\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890450 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890484 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-scripts\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.890510 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1343dbdf-afca-44d9-a8b3-828c71fe25a1-logs\") pod \"cinder-api-0\" (UID: 
\"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.991977 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-scripts\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992603 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1343dbdf-afca-44d9-a8b3-828c71fe25a1-logs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992734 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5wlp\" (UniqueName: \"kubernetes.io/projected/1343dbdf-afca-44d9-a8b3-828c71fe25a1-kube-api-access-j5wlp\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992841 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1343dbdf-afca-44d9-a8b3-828c71fe25a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992858 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992910 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992933 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-config-data\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.992973 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.993772 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1343dbdf-afca-44d9-a8b3-828c71fe25a1-logs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:24 crc kubenswrapper[4624]: I0228 03:56:24.995401 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1343dbdf-afca-44d9-a8b3-828c71fe25a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc 
kubenswrapper[4624]: I0228 03:56:25.006935 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.007101 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.008058 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-scripts\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.008400 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-config-data\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.017563 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.019911 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1343dbdf-afca-44d9-a8b3-828c71fe25a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.020524 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5wlp\" (UniqueName: \"kubernetes.io/projected/1343dbdf-afca-44d9-a8b3-828c71fe25a1-kube-api-access-j5wlp\") pod \"cinder-api-0\" (UID: \"1343dbdf-afca-44d9-a8b3-828c71fe25a1\") " pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.120237 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.125953 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.654523 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-central-agent" containerID="cri-o://3c98666b447fb8ae6db51e9d3e116914b4b99b4868ec4f009afd3e8ff7419b85" gracePeriod=30 Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.655451 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="proxy-httpd" containerID="cri-o://58e5fdacef8545aa12cf41a1023f75d5d84750fb74cd54820aa1a465a9ea6471" gracePeriod=30 Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.655514 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="sg-core" containerID="cri-o://a29da295fed42c4d1ff38b3c25cc0f3795b5fbc331a20617bbaf14e0cb139c41" gracePeriod=30 Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.655562 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-notification-agent" containerID="cri-o://09a66272fa4f939e3f477740227abb2056712264bd4dd02686aa49af75c1405e" gracePeriod=30 Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.748961 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 03:56:25 crc kubenswrapper[4624]: I0228 03:56:25.835073 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.130588 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a8f10e-9109-4a57-b870-9f337557365d" path="/var/lib/kubelet/pods/63a8f10e-9109-4a57-b870-9f337557365d/volumes" Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.669549 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1343dbdf-afca-44d9-a8b3-828c71fe25a1","Type":"ContainerStarted","Data":"34f1765f38eb33927169b6582e503b7bf4b07c47e35976f541c28d68c2e76035"} Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.681521 4624 generic.go:334] "Generic (PLEG): container finished" podID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerID="58e5fdacef8545aa12cf41a1023f75d5d84750fb74cd54820aa1a465a9ea6471" exitCode=0 Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.681582 4624 generic.go:334] "Generic (PLEG): container finished" podID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerID="a29da295fed42c4d1ff38b3c25cc0f3795b5fbc331a20617bbaf14e0cb139c41" exitCode=2 Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.681593 4624 generic.go:334] "Generic (PLEG): container finished" podID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerID="09a66272fa4f939e3f477740227abb2056712264bd4dd02686aa49af75c1405e" exitCode=0 Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.681629 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerDied","Data":"58e5fdacef8545aa12cf41a1023f75d5d84750fb74cd54820aa1a465a9ea6471"} Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.681671 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerDied","Data":"a29da295fed42c4d1ff38b3c25cc0f3795b5fbc331a20617bbaf14e0cb139c41"} Feb 28 03:56:26 crc kubenswrapper[4624]: I0228 03:56:26.681689 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerDied","Data":"09a66272fa4f939e3f477740227abb2056712264bd4dd02686aa49af75c1405e"} Feb 28 03:56:27 crc kubenswrapper[4624]: I0228 03:56:27.699208 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1343dbdf-afca-44d9-a8b3-828c71fe25a1","Type":"ContainerStarted","Data":"733f0d81d0c0e82a0d0d8f37f5ff888a35e655fd76f0204505994fcf415f976f"} Feb 28 03:56:27 crc kubenswrapper[4624]: I0228 03:56:27.699717 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 03:56:27 crc kubenswrapper[4624]: I0228 03:56:27.699728 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1343dbdf-afca-44d9-a8b3-828c71fe25a1","Type":"ContainerStarted","Data":"de4961708ba73cc3ff2caf9b9f8beb021a365579a6cbdf9f34c4766a051a2da3"} Feb 28 03:56:27 crc kubenswrapper[4624]: I0228 03:56:27.741305 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.741282269 podStartE2EDuration="3.741282269s" podCreationTimestamp="2026-02-28 03:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:27.730367592 +0000 UTC m=+1242.394406901" 
watchObservedRunningTime="2026-02-28 03:56:27.741282269 +0000 UTC m=+1242.405321578" Feb 28 03:56:29 crc kubenswrapper[4624]: I0228 03:56:29.728453 4624 generic.go:334] "Generic (PLEG): container finished" podID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerID="3c98666b447fb8ae6db51e9d3e116914b4b99b4868ec4f009afd3e8ff7419b85" exitCode=0 Feb 28 03:56:29 crc kubenswrapper[4624]: I0228 03:56:29.728509 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerDied","Data":"3c98666b447fb8ae6db51e9d3e116914b4b99b4868ec4f009afd3e8ff7419b85"} Feb 28 03:56:34 crc kubenswrapper[4624]: I0228 03:56:34.154902 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:56:34 crc kubenswrapper[4624]: I0228 03:56:34.323821 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:56:38 crc kubenswrapper[4624]: I0228 03:56:38.658625 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 28 03:56:39 crc kubenswrapper[4624]: I0228 03:56:39.131403 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="1343dbdf-afca-44d9-a8b3-828c71fe25a1" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.187:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:56:39 crc kubenswrapper[4624]: 
E0228 03:56:39.952298 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 28 03:56:39 crc kubenswrapper[4624]: E0228 03:56:39.952471 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnlqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAs
User:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-fr975_openstack(46302b23-1f0a-4e63-948a-bcc402ca3dc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 03:56:39 crc kubenswrapper[4624]: E0228 03:56:39.953697 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-fr975" podUID="46302b23-1f0a-4e63-948a-bcc402ca3dc1" Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.493905 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.592992 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-config-data\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593066 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-run-httpd\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593113 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7mq2\" (UniqueName: \"kubernetes.io/projected/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-kube-api-access-q7mq2\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593191 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-sg-core-conf-yaml\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593231 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-combined-ca-bundle\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") " Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593263 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-log-httpd\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") "
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593281 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-scripts\") pod \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\" (UID: \"05d65bf1-16eb-469d-81c5-8dfe7a14c79e\") "
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.593627 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.594743 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.605978 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-kube-api-access-q7mq2" (OuterVolumeSpecName: "kube-api-access-q7mq2") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "kube-api-access-q7mq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.612945 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-scripts" (OuterVolumeSpecName: "scripts") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.690366 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.695497 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.695539 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7mq2\" (UniqueName: \"kubernetes.io/projected/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-kube-api-access-q7mq2\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.695550 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.695559 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.695572 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.791483 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.800850 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.855871 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-config-data" (OuterVolumeSpecName: "config-data") pod "05d65bf1-16eb-469d-81c5-8dfe7a14c79e" (UID: "05d65bf1-16eb-469d-81c5-8dfe7a14c79e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.902308 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d65bf1-16eb-469d-81c5-8dfe7a14c79e-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.917189 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.925876 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05d65bf1-16eb-469d-81c5-8dfe7a14c79e","Type":"ContainerDied","Data":"179a647314736bf89e5190e3738acaefa4fd24edf298d6f81fde333dbf6ab660"}
Feb 28 03:56:40 crc kubenswrapper[4624]: I0228 03:56:40.925932 4624 scope.go:117] "RemoveContainer" containerID="58e5fdacef8545aa12cf41a1023f75d5d84750fb74cd54820aa1a465a9ea6471"
Feb 28 03:56:40 crc kubenswrapper[4624]: E0228 03:56:40.930421 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-fr975" podUID="46302b23-1f0a-4e63-948a-bcc402ca3dc1"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.005098 4624 scope.go:117] "RemoveContainer" containerID="a29da295fed42c4d1ff38b3c25cc0f3795b5fbc331a20617bbaf14e0cb139c41"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.043900 4624 scope.go:117] "RemoveContainer" containerID="09a66272fa4f939e3f477740227abb2056712264bd4dd02686aa49af75c1405e"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.064141 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.074709 4624 scope.go:117] "RemoveContainer" containerID="3c98666b447fb8ae6db51e9d3e116914b4b99b4868ec4f009afd3e8ff7419b85"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.078829 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.131383 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 28 03:56:41 crc kubenswrapper[4624]: E0228 03:56:41.133240 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="proxy-httpd"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.133268 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="proxy-httpd"
Feb 28 03:56:41 crc kubenswrapper[4624]: E0228 03:56:41.133304 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-notification-agent"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.133312 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-notification-agent"
Feb 28 03:56:41 crc kubenswrapper[4624]: E0228 03:56:41.133329 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-central-agent"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.133337 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-central-agent"
Feb 28 03:56:41 crc kubenswrapper[4624]: E0228 03:56:41.133362 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="sg-core"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.133369 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="sg-core"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.134123 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="sg-core"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.134159 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-central-agent"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.134182 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="ceilometer-notification-agent"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.134207 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" containerName="proxy-httpd"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.138432 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.148380 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.150981 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.218176 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.235575 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-run-httpd\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.235657 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.236310 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.236475 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-config-data\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.237047 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-scripts\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.237275 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-log-httpd\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.237377 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ttb\" (UniqueName: \"kubernetes.io/projected/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-kube-api-access-t6ttb\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.339760 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-config-data\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.339879 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-scripts\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.339919 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-log-httpd\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.339957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6ttb\" (UniqueName: \"kubernetes.io/projected/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-kube-api-access-t6ttb\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.340045 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-run-httpd\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.340068 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.340147 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.340765 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-log-httpd\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.340973 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-run-httpd\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.349146 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-scripts\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.350922 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-config-data\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.351145 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.360992 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.367021 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6ttb\" (UniqueName: \"kubernetes.io/projected/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-kube-api-access-t6ttb\") pod \"ceilometer-0\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " pod="openstack/ceilometer-0"
Feb 28 03:56:41 crc kubenswrapper[4624]: I0228 03:56:41.499627 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.064211 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.110357 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d65bf1-16eb-469d-81c5-8dfe7a14c79e" path="/var/lib/kubelet/pods/05d65bf1-16eb-469d-81c5-8dfe7a14c79e/volumes"
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.310572 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.729500 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.730272 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-log" containerID="cri-o://0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff" gracePeriod=30
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.730810 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-httpd" containerID="cri-o://7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf" gracePeriod=30
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.968974 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerStarted","Data":"cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb"}
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.969045 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerStarted","Data":"6c8d499cb470783fde62db8ec1718750c97614fb6c4e1a306fd95ff9009ba795"}
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.971610 4624 generic.go:334] "Generic (PLEG): container finished" podID="0e91936f-89cc-4665-b489-773e9c2682f2" containerID="0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff" exitCode=143
Feb 28 03:56:42 crc kubenswrapper[4624]: I0228 03:56:42.971656 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e91936f-89cc-4665-b489-773e9c2682f2","Type":"ContainerDied","Data":"0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff"}
Feb 28 03:56:43 crc kubenswrapper[4624]: I0228 03:56:43.982426 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerStarted","Data":"15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73"}
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.110122 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.110365 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-log" containerID="cri-o://68b8efad63e5578a813fc70881d06e065546606f2906634fb5d2972b3ed539a8" gracePeriod=30
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.110731 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-httpd" containerID="cri-o://9170f7af524ba32be9556b97fb7fe1997d75397cc1cab90b26745109e85b852f" gracePeriod=30
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.147197 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.147329 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p"
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.148630 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7f2e0d50f88199a063c27e79239b0538235bbf6111c30dfd8d0c32d33e145b7c"} pod="openstack/horizon-5b4bc59cd8-fkd4p" containerMessage="Container horizon failed startup probe, will be restarted"
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.148695 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" containerID="cri-o://7f2e0d50f88199a063c27e79239b0538235bbf6111c30dfd8d0c32d33e145b7c" gracePeriod=30
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.322245 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.322641 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm"
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.323633 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"940754d093426155a0bfd9f597844251410b1b7303859424203ebbb0061de2e3"} pod="openstack/horizon-6cc988c5cd-svksm" containerMessage="Container horizon failed startup probe, will be restarted"
Feb 28 03:56:44 crc kubenswrapper[4624]: I0228 03:56:44.323753 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" containerID="cri-o://940754d093426155a0bfd9f597844251410b1b7303859424203ebbb0061de2e3" gracePeriod=30
Feb 28 03:56:45 crc kubenswrapper[4624]: I0228 03:56:45.006975 4624 generic.go:334] "Generic (PLEG): container finished" podID="f2113d6b-377f-426a-9886-d0cd608558b1" containerID="68b8efad63e5578a813fc70881d06e065546606f2906634fb5d2972b3ed539a8" exitCode=143
Feb 28 03:56:45 crc kubenswrapper[4624]: I0228 03:56:45.007429 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2113d6b-377f-426a-9886-d0cd608558b1","Type":"ContainerDied","Data":"68b8efad63e5578a813fc70881d06e065546606f2906634fb5d2972b3ed539a8"}
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.667954 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.775988 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-combined-ca-bundle\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776561 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5ph5\" (UniqueName: \"kubernetes.io/projected/0e91936f-89cc-4665-b489-773e9c2682f2-kube-api-access-t5ph5\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776589 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-public-tls-certs\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776614 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-scripts\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776664 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-config-data\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776764 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-httpd-run\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.776880 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-logs\") pod \"0e91936f-89cc-4665-b489-773e9c2682f2\" (UID: \"0e91936f-89cc-4665-b489-773e9c2682f2\") "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.777438 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.777841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-logs" (OuterVolumeSpecName: "logs") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.784892 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e91936f-89cc-4665-b489-773e9c2682f2-kube-api-access-t5ph5" (OuterVolumeSpecName: "kube-api-access-t5ph5") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "kube-api-access-t5ph5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.785391 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-scripts" (OuterVolumeSpecName: "scripts") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.801255 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.867680 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.881269 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5ph5\" (UniqueName: \"kubernetes.io/projected/0e91936f-89cc-4665-b489-773e9c2682f2-kube-api-access-t5ph5\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.882751 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.882911 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.883040 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.883272 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91936f-89cc-4665-b489-773e9c2682f2-logs\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.883424 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.902108 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-config-data" (OuterVolumeSpecName: "config-data") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.923632 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.931217 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e91936f-89cc-4665-b489-773e9c2682f2" (UID: "0e91936f-89cc-4665-b489-773e9c2682f2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.987042 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.987132 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91936f-89cc-4665-b489-773e9c2682f2-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:46 crc kubenswrapper[4624]: I0228 03:56:46.987147 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.031601 4624 generic.go:334] "Generic (PLEG): container finished" podID="0e91936f-89cc-4665-b489-773e9c2682f2" containerID="7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf" exitCode=0
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.031660 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e91936f-89cc-4665-b489-773e9c2682f2","Type":"ContainerDied","Data":"7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf"}
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.031718 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.032038 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e91936f-89cc-4665-b489-773e9c2682f2","Type":"ContainerDied","Data":"83188a0e1314dba0d9829c33020240bef489f3998cb4a65f935c686bc1095a1a"}
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.032217 4624 scope.go:117] "RemoveContainer" containerID="7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.035561 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerStarted","Data":"bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5"}
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.117449 4624 scope.go:117] "RemoveContainer" containerID="0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.168162 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.188316 4624 scope.go:117] "RemoveContainer" containerID="7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.192552 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 28 03:56:47 crc kubenswrapper[4624]: E0228 03:56:47.208448 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf\": container with ID starting with 7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf not found: ID does not exist" containerID="7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.208586 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf"} err="failed to get container status \"7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf\": rpc error: code = NotFound desc = could not find container \"7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf\": container with ID starting with 7476cedd97395017580bb1f6b26dd7da9475f2630b08a93bdcac408d703e3abf not found: ID does not exist"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.208646 4624 scope.go:117] "RemoveContainer" containerID="0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff"
Feb 28 03:56:47 crc kubenswrapper[4624]: E0228 03:56:47.217641 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff\": container with ID starting with 0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff not found: ID does not exist" containerID="0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff"
Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.217819 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff"} err="failed to get container status \"0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff\": rpc error: code = NotFound desc = could not find container \"0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff\": container
with ID starting with 0924a2b6a86c598e8cc84895ec79155a41609431bd380ec403b51b16d40ce9ff not found: ID does not exist" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.302989 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:56:47 crc kubenswrapper[4624]: E0228 03:56:47.304397 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-httpd" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.304444 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-httpd" Feb 28 03:56:47 crc kubenswrapper[4624]: E0228 03:56:47.304496 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-log" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.304503 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-log" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.305152 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-httpd" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.305196 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" containerName="glance-log" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.306915 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.319684 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.320056 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.338719 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.425328 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.425844 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.425882 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.425932 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.425961 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.425998 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lnmt\" (UniqueName: \"kubernetes.io/projected/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-kube-api-access-6lnmt\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.426030 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.426093 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.528884 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529013 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529170 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lnmt\" (UniqueName: \"kubernetes.io/projected/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-kube-api-access-6lnmt\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529296 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529478 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529615 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-logs\") pod 
\"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529617 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529637 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.529723 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.530546 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.530576 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.546509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.548320 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.548658 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.552787 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.557344 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lnmt\" (UniqueName: \"kubernetes.io/projected/593dc11b-6b54-49d4-b9d9-c233b6ecd3ca-kube-api-access-6lnmt\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc 
kubenswrapper[4624]: I0228 03:56:47.563678 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca\") " pod="openstack/glance-default-external-api-0" Feb 28 03:56:47 crc kubenswrapper[4624]: I0228 03:56:47.646490 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.083623 4624 generic.go:334] "Generic (PLEG): container finished" podID="f2113d6b-377f-426a-9886-d0cd608558b1" containerID="9170f7af524ba32be9556b97fb7fe1997d75397cc1cab90b26745109e85b852f" exitCode=0 Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.083991 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2113d6b-377f-426a-9886-d0cd608558b1","Type":"ContainerDied","Data":"9170f7af524ba32be9556b97fb7fe1997d75397cc1cab90b26745109e85b852f"} Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.106920 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e91936f-89cc-4665-b489-773e9c2682f2" path="/var/lib/kubelet/pods/0e91936f-89cc-4665-b489-773e9c2682f2/volumes" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.472338 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.519037 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.674249 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-combined-ca-bundle\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.674353 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-httpd-run\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.674876 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.675292 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwvth\" (UniqueName: \"kubernetes.io/projected/f2113d6b-377f-426a-9886-d0cd608558b1-kube-api-access-pwvth\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.675455 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.675830 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-config-data\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.676466 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-internal-tls-certs\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.676596 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-logs\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.676921 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-scripts\") pod \"f2113d6b-377f-426a-9886-d0cd608558b1\" (UID: \"f2113d6b-377f-426a-9886-d0cd608558b1\") " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.678380 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-logs" (OuterVolumeSpecName: "logs") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.678530 4624 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.693832 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-scripts" (OuterVolumeSpecName: "scripts") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.693985 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.708299 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2113d6b-377f-426a-9886-d0cd608558b1-kube-api-access-pwvth" (OuterVolumeSpecName: "kube-api-access-pwvth") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "kube-api-access-pwvth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.778213 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.787295 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2113d6b-377f-426a-9886-d0cd608558b1-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.787326 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.787334 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.787344 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwvth\" (UniqueName: \"kubernetes.io/projected/f2113d6b-377f-426a-9886-d0cd608558b1-kube-api-access-pwvth\") on node 
\"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.787368 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.816151 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-config-data" (OuterVolumeSpecName: "config-data") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.854903 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.882260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f2113d6b-377f-426a-9886-d0cd608558b1" (UID: "f2113d6b-377f-426a-9886-d0cd608558b1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.889455 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.889490 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2113d6b-377f-426a-9886-d0cd608558b1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:48 crc kubenswrapper[4624]: I0228 03:56:48.889502 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.119392 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca","Type":"ContainerStarted","Data":"ed1788f41e1104b6dcd0ecd3a54698d8fb9e496fa986815fd2bc7937dcf73bb1"} Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.130922 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2113d6b-377f-426a-9886-d0cd608558b1","Type":"ContainerDied","Data":"842faf43ecb5cfbf57046317d937dbd59116fb966263c794bc46f822246d5823"} Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.131007 4624 scope.go:117] "RemoveContainer" containerID="9170f7af524ba32be9556b97fb7fe1997d75397cc1cab90b26745109e85b852f" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.131199 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.148472 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerStarted","Data":"7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0"} Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.148740 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-central-agent" containerID="cri-o://cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb" gracePeriod=30 Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.148796 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.148855 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="sg-core" containerID="cri-o://bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5" gracePeriod=30 Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.148967 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-notification-agent" containerID="cri-o://15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73" gracePeriod=30 Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.149018 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="proxy-httpd" containerID="cri-o://7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0" gracePeriod=30 Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.208107 4624 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.129783588 podStartE2EDuration="8.208061839s" podCreationTimestamp="2026-02-28 03:56:41 +0000 UTC" firstStartedPulling="2026-02-28 03:56:42.080938899 +0000 UTC m=+1256.744978198" lastFinishedPulling="2026-02-28 03:56:48.15921714 +0000 UTC m=+1262.823256449" observedRunningTime="2026-02-28 03:56:49.191446068 +0000 UTC m=+1263.855485377" watchObservedRunningTime="2026-02-28 03:56:49.208061839 +0000 UTC m=+1263.872101148" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.250784 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.282273 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.284564 4624 scope.go:117] "RemoveContainer" containerID="68b8efad63e5578a813fc70881d06e065546606f2906634fb5d2972b3ed539a8" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.309156 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:56:49 crc kubenswrapper[4624]: E0228 03:56:49.309976 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-httpd" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.310070 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-httpd" Feb 28 03:56:49 crc kubenswrapper[4624]: E0228 03:56:49.310186 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-log" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.310268 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-log" Feb 28 
03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.310559 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-httpd" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.310660 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" containerName="glance-log" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.311898 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.317723 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.317979 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.326946 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400543 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400627 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a94cc3e1-53ad-429f-b778-ae8941ba8085-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400692 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400719 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a94cc3e1-53ad-429f-b778-ae8941ba8085-logs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400747 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/a94cc3e1-53ad-429f-b778-ae8941ba8085-kube-api-access-sb8gs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400776 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.400860 
4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.502990 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503057 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a94cc3e1-53ad-429f-b778-ae8941ba8085-logs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503111 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/a94cc3e1-53ad-429f-b778-ae8941ba8085-kube-api-access-sb8gs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503151 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503223 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503252 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503274 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503321 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a94cc3e1-53ad-429f-b778-ae8941ba8085-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.503826 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a94cc3e1-53ad-429f-b778-ae8941ba8085-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.504629 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.504732 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a94cc3e1-53ad-429f-b778-ae8941ba8085-logs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.515396 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.515596 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.523338 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.540556 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a94cc3e1-53ad-429f-b778-ae8941ba8085-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.624523 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.632358 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8gs\" (UniqueName: \"kubernetes.io/projected/a94cc3e1-53ad-429f-b778-ae8941ba8085-kube-api-access-sb8gs\") pod \"glance-default-internal-api-0\" (UID: \"a94cc3e1-53ad-429f-b778-ae8941ba8085\") " pod="openstack/glance-default-internal-api-0" Feb 28 03:56:49 crc kubenswrapper[4624]: I0228 03:56:49.685711 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.104296 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2113d6b-377f-426a-9886-d0cd608558b1" path="/var/lib/kubelet/pods/f2113d6b-377f-426a-9886-d0cd608558b1/volumes" Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.199853 4624 generic.go:334] "Generic (PLEG): container finished" podID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerID="7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0" exitCode=0 Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.199899 4624 generic.go:334] "Generic (PLEG): container finished" podID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerID="bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5" exitCode=2 Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.199910 4624 generic.go:334] "Generic (PLEG): container finished" podID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" 
containerID="15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73" exitCode=0 Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.199992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerDied","Data":"7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0"} Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.200033 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerDied","Data":"bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5"} Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.200044 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerDied","Data":"15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73"} Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.206489 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca","Type":"ContainerStarted","Data":"559135460dac3312effe1ea89d1cf95c545545f13f4d1a303caf0f072c34e11f"} Feb 28 03:56:50 crc kubenswrapper[4624]: I0228 03:56:50.447291 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 03:56:50 crc kubenswrapper[4624]: W0228 03:56:50.471169 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda94cc3e1_53ad_429f_b778_ae8941ba8085.slice/crio-389bf1ded2211a7ed762b19f6f964df0b8d51adf332c219fd739c71ea6a94b6c WatchSource:0}: Error finding container 389bf1ded2211a7ed762b19f6f964df0b8d51adf332c219fd739c71ea6a94b6c: Status 404 returned error can't find the container with id 389bf1ded2211a7ed762b19f6f964df0b8d51adf332c219fd739c71ea6a94b6c 
Feb 28 03:56:51 crc kubenswrapper[4624]: I0228 03:56:51.221397 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"593dc11b-6b54-49d4-b9d9-c233b6ecd3ca","Type":"ContainerStarted","Data":"8cd62c0a1209d425283d23c8604022abe53362f2c283bed1310e10fca7057a64"} Feb 28 03:56:51 crc kubenswrapper[4624]: I0228 03:56:51.223467 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a94cc3e1-53ad-429f-b778-ae8941ba8085","Type":"ContainerStarted","Data":"389bf1ded2211a7ed762b19f6f964df0b8d51adf332c219fd739c71ea6a94b6c"} Feb 28 03:56:51 crc kubenswrapper[4624]: I0228 03:56:51.262126 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.262105212 podStartE2EDuration="4.262105212s" podCreationTimestamp="2026-02-28 03:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:51.252615414 +0000 UTC m=+1265.916654723" watchObservedRunningTime="2026-02-28 03:56:51.262105212 +0000 UTC m=+1265.926144521" Feb 28 03:56:52 crc kubenswrapper[4624]: I0228 03:56:52.241164 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a94cc3e1-53ad-429f-b778-ae8941ba8085","Type":"ContainerStarted","Data":"e35a9d17472c68b9a4972c7a5ed8638b5067ad93c68ebfbeb61407d9792f0fdf"} Feb 28 03:56:53 crc kubenswrapper[4624]: I0228 03:56:53.253680 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a94cc3e1-53ad-429f-b778-ae8941ba8085","Type":"ContainerStarted","Data":"966cd17a6de71000211d035b16c648734ee03b6a71085402e88274565f3613bd"} Feb 28 03:56:53 crc kubenswrapper[4624]: I0228 03:56:53.255564 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-fr975" event={"ID":"46302b23-1f0a-4e63-948a-bcc402ca3dc1","Type":"ContainerStarted","Data":"01da1f38ecf1123e77721a9acf0f2037630441a4a32b26ee600456fab409c75e"} Feb 28 03:56:53 crc kubenswrapper[4624]: I0228 03:56:53.281442 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.281423691 podStartE2EDuration="4.281423691s" podCreationTimestamp="2026-02-28 03:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:56:53.276475288 +0000 UTC m=+1267.940514597" watchObservedRunningTime="2026-02-28 03:56:53.281423691 +0000 UTC m=+1267.945463000" Feb 28 03:56:53 crc kubenswrapper[4624]: I0228 03:56:53.307563 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fr975" podStartSLOduration=3.237149074 podStartE2EDuration="32.307530931s" podCreationTimestamp="2026-02-28 03:56:21 +0000 UTC" firstStartedPulling="2026-02-28 03:56:22.736893667 +0000 UTC m=+1237.400932986" lastFinishedPulling="2026-02-28 03:56:51.807275544 +0000 UTC m=+1266.471314843" observedRunningTime="2026-02-28 03:56:53.306377349 +0000 UTC m=+1267.970416658" watchObservedRunningTime="2026-02-28 03:56:53.307530931 +0000 UTC m=+1267.971570240" Feb 28 03:56:56 crc kubenswrapper[4624]: I0228 03:56:56.949037 4624 scope.go:117] "RemoveContainer" containerID="2305eda414191aa7796a7e7928f155097172c0d31b19b4fdd79d7f26e2e3bf41" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.275274 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.314904 4624 generic.go:334] "Generic (PLEG): container finished" podID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerID="cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb" exitCode=0 Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.314996 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerDied","Data":"cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb"} Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.315041 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510","Type":"ContainerDied","Data":"6c8d499cb470783fde62db8ec1718750c97614fb6c4e1a306fd95ff9009ba795"} Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.315066 4624 scope.go:117] "RemoveContainer" containerID="7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.315292 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373410 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-combined-ca-bundle\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-config-data\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373645 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-log-httpd\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373686 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6ttb\" (UniqueName: \"kubernetes.io/projected/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-kube-api-access-t6ttb\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373773 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-run-httpd\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373791 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-scripts\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.373856 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-sg-core-conf-yaml\") pod \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\" (UID: \"a16ce1e5-8984-4e2e-9e7a-4950a9e8e510\") " Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.374665 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.375694 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.379355 4624 scope.go:117] "RemoveContainer" containerID="bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.384425 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-scripts" (OuterVolumeSpecName: "scripts") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.388351 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-kube-api-access-t6ttb" (OuterVolumeSpecName: "kube-api-access-t6ttb") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "kube-api-access-t6ttb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.448450 4624 scope.go:117] "RemoveContainer" containerID="15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.451718 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.475925 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.475960 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6ttb\" (UniqueName: \"kubernetes.io/projected/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-kube-api-access-t6ttb\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.475972 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.475981 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.475990 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.481726 4624 scope.go:117] "RemoveContainer" containerID="cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.504188 4624 scope.go:117] "RemoveContainer" containerID="7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.505512 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0\": container 
with ID starting with 7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0 not found: ID does not exist" containerID="7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.505570 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0"} err="failed to get container status \"7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0\": rpc error: code = NotFound desc = could not find container \"7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0\": container with ID starting with 7d029a5976d43256ab730a41713fb034b740eb4de127827c32c566eed22cc4f0 not found: ID does not exist" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.505611 4624 scope.go:117] "RemoveContainer" containerID="bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.506236 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5\": container with ID starting with bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5 not found: ID does not exist" containerID="bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.506300 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5"} err="failed to get container status \"bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5\": rpc error: code = NotFound desc = could not find container \"bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5\": container with ID starting with bf4624968560a5ce3cd212d55b8b56cf41ca2a0e5d69f3195d87cf568b3ba9b5 not 
found: ID does not exist" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.506350 4624 scope.go:117] "RemoveContainer" containerID="15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.506700 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73\": container with ID starting with 15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73 not found: ID does not exist" containerID="15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.506732 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73"} err="failed to get container status \"15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73\": rpc error: code = NotFound desc = could not find container \"15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73\": container with ID starting with 15b9fabb3008463c28ae736da03368976048af1bfdd9aa5d5185445253ecec73 not found: ID does not exist" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.506753 4624 scope.go:117] "RemoveContainer" containerID="cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.507165 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb\": container with ID starting with cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb not found: ID does not exist" containerID="cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.507200 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb"} err="failed to get container status \"cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb\": rpc error: code = NotFound desc = could not find container \"cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb\": container with ID starting with cfb060c7537a91e9148e503f65a7bf471b15edb4e9931f3e831bda570ae5c9bb not found: ID does not exist" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.524019 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.540342 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-config-data" (OuterVolumeSpecName: "config-data") pod "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" (UID: "a16ce1e5-8984-4e2e-9e7a-4950a9e8e510"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.578204 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.578266 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.647585 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.648685 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.650458 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.667450 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.687256 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.687989 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-central-agent" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688021 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-central-agent" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.688036 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" 
containerName="ceilometer-notification-agent" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688042 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-notification-agent" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.688061 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="sg-core" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688068 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="sg-core" Feb 28 03:56:57 crc kubenswrapper[4624]: E0228 03:56:57.688114 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="proxy-httpd" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688122 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="proxy-httpd" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688406 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="sg-core" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688451 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-central-agent" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688461 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="ceilometer-notification-agent" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.688473 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" containerName="proxy-httpd" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.692381 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.695143 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.696031 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.720723 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.747331 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.781926 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-config-data\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.781963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-run-httpd\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.782028 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.782060 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sm8wl\" (UniqueName: \"kubernetes.io/projected/cf86c54d-fa0d-4542-8590-af16e91ac1a5-kube-api-access-sm8wl\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.782126 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.782168 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-log-httpd\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.782209 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-scripts\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.796385 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883568 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-scripts\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883635 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-config-data\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883653 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-run-httpd\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883697 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883727 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm8wl\" (UniqueName: \"kubernetes.io/projected/cf86c54d-fa0d-4542-8590-af16e91ac1a5-kube-api-access-sm8wl\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883774 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.883807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-log-httpd\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 
03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.884652 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-log-httpd\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.884753 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-run-httpd\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.891566 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.893570 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-scripts\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.894924 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-config-data\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.895957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:57 crc kubenswrapper[4624]: I0228 03:56:57.901941 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm8wl\" (UniqueName: \"kubernetes.io/projected/cf86c54d-fa0d-4542-8590-af16e91ac1a5-kube-api-access-sm8wl\") pod \"ceilometer-0\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " pod="openstack/ceilometer-0" Feb 28 03:56:58 crc kubenswrapper[4624]: I0228 03:56:58.042747 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:56:58 crc kubenswrapper[4624]: I0228 03:56:58.098052 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a16ce1e5-8984-4e2e-9e7a-4950a9e8e510" path="/var/lib/kubelet/pods/a16ce1e5-8984-4e2e-9e7a-4950a9e8e510/volumes" Feb 28 03:56:58 crc kubenswrapper[4624]: I0228 03:56:58.350653 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 03:56:58 crc kubenswrapper[4624]: I0228 03:56:58.351225 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 03:56:58 crc kubenswrapper[4624]: I0228 03:56:58.594266 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:56:59 crc kubenswrapper[4624]: I0228 03:56:59.363176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerStarted","Data":"40338ca9e29c6992b06cb7d6d97f2403f1ecac4462ca7983a8d0df02fa0ce87c"} Feb 28 03:56:59 crc kubenswrapper[4624]: I0228 03:56:59.364724 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerStarted","Data":"09ae68f230e76df44160b29676296f58744675d93f3398e9898f359b197c96ce"} Feb 28 03:56:59 crc kubenswrapper[4624]: 
I0228 03:56:59.686795 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:59 crc kubenswrapper[4624]: I0228 03:56:59.687119 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:59 crc kubenswrapper[4624]: I0228 03:56:59.764809 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 03:56:59 crc kubenswrapper[4624]: I0228 03:56:59.803310 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 03:57:00 crc kubenswrapper[4624]: I0228 03:57:00.386371 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:57:00 crc kubenswrapper[4624]: I0228 03:57:00.386860 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:57:00 crc kubenswrapper[4624]: I0228 03:57:00.386573 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerStarted","Data":"7b3ab616871c0ba689e9639311535e1df4c2812734e447b8e154e54b3bf866ae"} Feb 28 03:57:00 crc kubenswrapper[4624]: I0228 03:57:00.388069 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 03:57:00 crc kubenswrapper[4624]: I0228 03:57:00.388303 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 03:57:01 crc kubenswrapper[4624]: I0228 03:57:01.156183 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 03:57:01 crc kubenswrapper[4624]: I0228 03:57:01.403921 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerStarted","Data":"5f9712b0d7ab9c460878e9f5eb2fc3b3292cb7b4cef0ff01aac69d323eed9931"} Feb 28 03:57:01 crc kubenswrapper[4624]: I0228 03:57:01.404532 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:57:01 crc kubenswrapper[4624]: I0228 03:57:01.605786 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 03:57:02 crc kubenswrapper[4624]: I0228 03:57:02.415386 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:57:02 crc kubenswrapper[4624]: I0228 03:57:02.415845 4624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 03:57:03 crc kubenswrapper[4624]: I0228 03:57:03.105326 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 03:57:03 crc kubenswrapper[4624]: I0228 03:57:03.107007 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 03:57:03 crc kubenswrapper[4624]: I0228 03:57:03.439579 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerStarted","Data":"aee1d89b13595a738d7db1da088f9e53acf1b5d7be46fc7256a7f6e99f3277ba"} Feb 28 03:57:03 crc kubenswrapper[4624]: I0228 03:57:03.439644 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:57:03 crc kubenswrapper[4624]: I0228 03:57:03.465303 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.525745952 podStartE2EDuration="6.46528555s" podCreationTimestamp="2026-02-28 03:56:57 +0000 UTC" firstStartedPulling="2026-02-28 03:56:58.614240572 +0000 UTC m=+1273.278279871" lastFinishedPulling="2026-02-28 
03:57:02.55378016 +0000 UTC m=+1277.217819469" observedRunningTime="2026-02-28 03:57:03.460578922 +0000 UTC m=+1278.124618231" watchObservedRunningTime="2026-02-28 03:57:03.46528555 +0000 UTC m=+1278.129324859" Feb 28 03:57:08 crc kubenswrapper[4624]: I0228 03:57:08.485750 4624 generic.go:334] "Generic (PLEG): container finished" podID="46302b23-1f0a-4e63-948a-bcc402ca3dc1" containerID="01da1f38ecf1123e77721a9acf0f2037630441a4a32b26ee600456fab409c75e" exitCode=0 Feb 28 03:57:08 crc kubenswrapper[4624]: I0228 03:57:08.485850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fr975" event={"ID":"46302b23-1f0a-4e63-948a-bcc402ca3dc1","Type":"ContainerDied","Data":"01da1f38ecf1123e77721a9acf0f2037630441a4a32b26ee600456fab409c75e"} Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.875933 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.962914 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-config-data\") pod \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.963579 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnlqs\" (UniqueName: \"kubernetes.io/projected/46302b23-1f0a-4e63-948a-bcc402ca3dc1-kube-api-access-pnlqs\") pod \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.965198 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-scripts\") pod \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\" (UID: 
\"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.965427 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-combined-ca-bundle\") pod \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\" (UID: \"46302b23-1f0a-4e63-948a-bcc402ca3dc1\") " Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.975026 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46302b23-1f0a-4e63-948a-bcc402ca3dc1-kube-api-access-pnlqs" (OuterVolumeSpecName: "kube-api-access-pnlqs") pod "46302b23-1f0a-4e63-948a-bcc402ca3dc1" (UID: "46302b23-1f0a-4e63-948a-bcc402ca3dc1"). InnerVolumeSpecName "kube-api-access-pnlqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.987596 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-scripts" (OuterVolumeSpecName: "scripts") pod "46302b23-1f0a-4e63-948a-bcc402ca3dc1" (UID: "46302b23-1f0a-4e63-948a-bcc402ca3dc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:09 crc kubenswrapper[4624]: I0228 03:57:09.998853 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46302b23-1f0a-4e63-948a-bcc402ca3dc1" (UID: "46302b23-1f0a-4e63-948a-bcc402ca3dc1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.010180 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-config-data" (OuterVolumeSpecName: "config-data") pod "46302b23-1f0a-4e63-948a-bcc402ca3dc1" (UID: "46302b23-1f0a-4e63-948a-bcc402ca3dc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.068777 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.068827 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.068837 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnlqs\" (UniqueName: \"kubernetes.io/projected/46302b23-1f0a-4e63-948a-bcc402ca3dc1-kube-api-access-pnlqs\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.068848 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46302b23-1f0a-4e63-948a-bcc402ca3dc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.204496 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.205137 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-central-agent" 
containerID="cri-o://40338ca9e29c6992b06cb7d6d97f2403f1ecac4462ca7983a8d0df02fa0ce87c" gracePeriod=30 Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.205243 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="proxy-httpd" containerID="cri-o://aee1d89b13595a738d7db1da088f9e53acf1b5d7be46fc7256a7f6e99f3277ba" gracePeriod=30 Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.205208 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="sg-core" containerID="cri-o://5f9712b0d7ab9c460878e9f5eb2fc3b3292cb7b4cef0ff01aac69d323eed9931" gracePeriod=30 Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.205291 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-notification-agent" containerID="cri-o://7b3ab616871c0ba689e9639311535e1df4c2812734e447b8e154e54b3bf866ae" gracePeriod=30 Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.515203 4624 generic.go:334] "Generic (PLEG): container finished" podID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerID="aee1d89b13595a738d7db1da088f9e53acf1b5d7be46fc7256a7f6e99f3277ba" exitCode=0 Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.515272 4624 generic.go:334] "Generic (PLEG): container finished" podID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerID="5f9712b0d7ab9c460878e9f5eb2fc3b3292cb7b4cef0ff01aac69d323eed9931" exitCode=2 Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.515232 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerDied","Data":"aee1d89b13595a738d7db1da088f9e53acf1b5d7be46fc7256a7f6e99f3277ba"} Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 
03:57:10.515370 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerDied","Data":"5f9712b0d7ab9c460878e9f5eb2fc3b3292cb7b4cef0ff01aac69d323eed9931"} Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.516718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fr975" event={"ID":"46302b23-1f0a-4e63-948a-bcc402ca3dc1","Type":"ContainerDied","Data":"988048281e39709bddf93932e08846135889ff9074b80aa3814e4b217608b461"} Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.516744 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988048281e39709bddf93932e08846135889ff9074b80aa3814e4b217608b461" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.516823 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fr975" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.685925 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 03:57:10 crc kubenswrapper[4624]: E0228 03:57:10.686376 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46302b23-1f0a-4e63-948a-bcc402ca3dc1" containerName="nova-cell0-conductor-db-sync" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.686394 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="46302b23-1f0a-4e63-948a-bcc402ca3dc1" containerName="nova-cell0-conductor-db-sync" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.686589 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="46302b23-1f0a-4e63-948a-bcc402ca3dc1" containerName="nova-cell0-conductor-db-sync" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.687669 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.690676 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nwwpd" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.691736 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.703400 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.783450 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.783608 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.783716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s6q9\" (UniqueName: \"kubernetes.io/projected/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-kube-api-access-8s6q9\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.886306 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.887509 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s6q9\" (UniqueName: \"kubernetes.io/projected/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-kube-api-access-8s6q9\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.887871 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.893320 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.894987 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:10 crc kubenswrapper[4624]: I0228 03:57:10.914481 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s6q9\" (UniqueName: \"kubernetes.io/projected/6d7f478f-8240-4fb8-8cfe-5b2e16c55b21-kube-api-access-8s6q9\") pod \"nova-cell0-conductor-0\" 
(UID: \"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21\") " pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.058877 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.533735 4624 generic.go:334] "Generic (PLEG): container finished" podID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerID="7b3ab616871c0ba689e9639311535e1df4c2812734e447b8e154e54b3bf866ae" exitCode=0 Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.534115 4624 generic.go:334] "Generic (PLEG): container finished" podID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerID="40338ca9e29c6992b06cb7d6d97f2403f1ecac4462ca7983a8d0df02fa0ce87c" exitCode=0 Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.534139 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerDied","Data":"7b3ab616871c0ba689e9639311535e1df4c2812734e447b8e154e54b3bf866ae"} Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.534169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerDied","Data":"40338ca9e29c6992b06cb7d6d97f2403f1ecac4462ca7983a8d0df02fa0ce87c"} Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.536438 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.585952 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.605899 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm8wl\" (UniqueName: \"kubernetes.io/projected/cf86c54d-fa0d-4542-8590-af16e91ac1a5-kube-api-access-sm8wl\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.605981 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-scripts\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.606006 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-sg-core-conf-yaml\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.606070 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-combined-ca-bundle\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.606194 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-config-data\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc 
kubenswrapper[4624]: I0228 03:57:11.606215 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-run-httpd\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.606359 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-log-httpd\") pod \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\" (UID: \"cf86c54d-fa0d-4542-8590-af16e91ac1a5\") " Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.607805 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.608159 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.612433 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf86c54d-fa0d-4542-8590-af16e91ac1a5-kube-api-access-sm8wl" (OuterVolumeSpecName: "kube-api-access-sm8wl") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "kube-api-access-sm8wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.614641 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-scripts" (OuterVolumeSpecName: "scripts") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.647103 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.708739 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm8wl\" (UniqueName: \"kubernetes.io/projected/cf86c54d-fa0d-4542-8590-af16e91ac1a5-kube-api-access-sm8wl\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.708784 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.708802 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.708813 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:11 crc 
kubenswrapper[4624]: I0228 03:57:11.708824 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf86c54d-fa0d-4542-8590-af16e91ac1a5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.710122 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.745041 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-config-data" (OuterVolumeSpecName: "config-data") pod "cf86c54d-fa0d-4542-8590-af16e91ac1a5" (UID: "cf86c54d-fa0d-4542-8590-af16e91ac1a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.810870 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:11 crc kubenswrapper[4624]: I0228 03:57:11.810914 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf86c54d-fa0d-4542-8590-af16e91ac1a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.557432 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf86c54d-fa0d-4542-8590-af16e91ac1a5","Type":"ContainerDied","Data":"09ae68f230e76df44160b29676296f58744675d93f3398e9898f359b197c96ce"} Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.559103 4624 scope.go:117] "RemoveContainer" containerID="aee1d89b13595a738d7db1da088f9e53acf1b5d7be46fc7256a7f6e99f3277ba" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.559379 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.568886 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21","Type":"ContainerStarted","Data":"02606812cfb2e433f4d0960dec70ed5cdee79e2cb6e9f300a2cb03a055bc10a4"} Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.569154 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6d7f478f-8240-4fb8-8cfe-5b2e16c55b21","Type":"ContainerStarted","Data":"efe8283a1da42f0ad3f5128b52b74457138efa2a8d43db20f600512f8581f4ee"} Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.569283 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.590443 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.606695 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.609776 4624 scope.go:117] "RemoveContainer" containerID="5f9712b0d7ab9c460878e9f5eb2fc3b3292cb7b4cef0ff01aac69d323eed9931" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.624142 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.624117236 podStartE2EDuration="2.624117236s" podCreationTimestamp="2026-02-28 03:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:12.620494607 +0000 UTC m=+1287.284533916" watchObservedRunningTime="2026-02-28 03:57:12.624117236 +0000 UTC m=+1287.288156535" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.647811 4624 scope.go:117] "RemoveContainer" 
containerID="7b3ab616871c0ba689e9639311535e1df4c2812734e447b8e154e54b3bf866ae" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.662443 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:12 crc kubenswrapper[4624]: E0228 03:57:12.662871 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-notification-agent" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.662884 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-notification-agent" Feb 28 03:57:12 crc kubenswrapper[4624]: E0228 03:57:12.662904 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-central-agent" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.662924 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-central-agent" Feb 28 03:57:12 crc kubenswrapper[4624]: E0228 03:57:12.662939 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="sg-core" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.662945 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="sg-core" Feb 28 03:57:12 crc kubenswrapper[4624]: E0228 03:57:12.662970 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="proxy-httpd" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.662975 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="proxy-httpd" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.663159 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-central-agent" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.663171 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="sg-core" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.663182 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="proxy-httpd" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.663193 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" containerName="ceilometer-notification-agent" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.664963 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.669067 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.679713 4624 scope.go:117] "RemoveContainer" containerID="40338ca9e29c6992b06cb7d6d97f2403f1ecac4462ca7983a8d0df02fa0ce87c" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.680516 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.716669 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.751658 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.751912 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-config-data\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.752055 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-log-httpd\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.752109 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-run-httpd\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.752177 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-scripts\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.752252 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.752963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcd7j\" (UniqueName: 
\"kubernetes.io/projected/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-kube-api-access-hcd7j\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855661 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-config-data\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855724 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-log-httpd\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855746 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-run-httpd\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855771 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-scripts\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855798 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855829 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcd7j\" (UniqueName: \"kubernetes.io/projected/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-kube-api-access-hcd7j\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.855878 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.857480 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-log-httpd\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.857539 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-run-httpd\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.863192 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.865896 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-scripts\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " 
pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.866588 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.874162 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-config-data\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:12 crc kubenswrapper[4624]: I0228 03:57:12.876933 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcd7j\" (UniqueName: \"kubernetes.io/projected/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-kube-api-access-hcd7j\") pod \"ceilometer-0\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " pod="openstack/ceilometer-0" Feb 28 03:57:13 crc kubenswrapper[4624]: I0228 03:57:13.026530 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:13 crc kubenswrapper[4624]: I0228 03:57:13.589611 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:13 crc kubenswrapper[4624]: I0228 03:57:13.596620 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerStarted","Data":"6f25392dc71b29e583972bc87e99fa3484755f663552e2b04353dc74b5f94ab7"} Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.102740 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf86c54d-fa0d-4542-8590-af16e91ac1a5" path="/var/lib/kubelet/pods/cf86c54d-fa0d-4542-8590-af16e91ac1a5/volumes" Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.609702 4624 generic.go:334] "Generic (PLEG): container finished" podID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerID="940754d093426155a0bfd9f597844251410b1b7303859424203ebbb0061de2e3" exitCode=137 Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.609778 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerDied","Data":"940754d093426155a0bfd9f597844251410b1b7303859424203ebbb0061de2e3"} Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.611149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerStarted","Data":"044c0669362491674e475d7359b88b571d3a8160c975f83c5501cc2c9a104e26"} Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.611234 4624 scope.go:117] "RemoveContainer" containerID="5fe9a71caf855bba85efda4f9cf05f256d26da38058ec310c3694ad3a81fae43" Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.615277 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" 
containerID="7f2e0d50f88199a063c27e79239b0538235bbf6111c30dfd8d0c32d33e145b7c" exitCode=137 Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.615330 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerDied","Data":"7f2e0d50f88199a063c27e79239b0538235bbf6111c30dfd8d0c32d33e145b7c"} Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.615359 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerStarted","Data":"93a0882fc2c46a1d7ae2f33f5d46e75723aa417d1b35f6b58392edea81f40dce"} Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.618259 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerStarted","Data":"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926"} Feb 28 03:57:14 crc kubenswrapper[4624]: I0228 03:57:14.835726 4624 scope.go:117] "RemoveContainer" containerID="0b99c2580069b02a4a68d0236b0e6f952ce768355aa02ba24cda3004836bf163" Feb 28 03:57:15 crc kubenswrapper[4624]: I0228 03:57:15.664929 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerStarted","Data":"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131"} Feb 28 03:57:16 crc kubenswrapper[4624]: I0228 03:57:16.109509 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 28 03:57:16 crc kubenswrapper[4624]: I0228 03:57:16.675529 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerStarted","Data":"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d"} Feb 28 03:57:17 crc 
kubenswrapper[4624]: I0228 03:57:17.169696 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-59ccl"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.171422 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.174593 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.179230 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.179827 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-59ccl"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.261353 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.261438 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-config-data\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.261496 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld86h\" (UniqueName: \"kubernetes.io/projected/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-kube-api-access-ld86h\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: 
\"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.261572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-scripts\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.364997 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-config-data\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.365105 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld86h\" (UniqueName: \"kubernetes.io/projected/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-kube-api-access-ld86h\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.365178 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-scripts\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.365247 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " 
pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.377123 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.378537 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-config-data\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.392983 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-scripts\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.405491 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld86h\" (UniqueName: \"kubernetes.io/projected/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-kube-api-access-ld86h\") pod \"nova-cell0-cell-mapping-59ccl\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.502903 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.509125 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.510903 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.520639 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.574552 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.615337 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.632497 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.644663 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.677850 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-config-data\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.679266 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9z8s\" (UniqueName: \"kubernetes.io/projected/2ea01709-ad14-4f28-beba-f1f2b24394a4-kube-api-access-n9z8s\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc 
kubenswrapper[4624]: I0228 03:57:17.679372 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-config-data\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.679494 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.679758 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k99q\" (UniqueName: \"kubernetes.io/projected/1ed3527b-0935-4801-b27d-df8653a25097-kube-api-access-4k99q\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.679849 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.680021 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea01709-ad14-4f28-beba-f1f2b24394a4-logs\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.680339 4624 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.748830 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.750717 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.763374 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.781877 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-config-data\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791318 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9z8s\" (UniqueName: \"kubernetes.io/projected/2ea01709-ad14-4f28-beba-f1f2b24394a4-kube-api-access-n9z8s\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791457 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-config-data\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791646 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc 
kubenswrapper[4624]: I0228 03:57:17.791709 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k99q\" (UniqueName: \"kubernetes.io/projected/1ed3527b-0935-4801-b27d-df8653a25097-kube-api-access-4k99q\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791819 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcwp\" (UniqueName: \"kubernetes.io/projected/dc271f84-9b09-4ac0-a69f-7bff0adb7498-kube-api-access-srcwp\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791853 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791890 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791916 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea01709-ad14-4f28-beba-f1f2b24394a4-logs\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.791988 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc271f84-9b09-4ac0-a69f-7bff0adb7498-logs\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.792067 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-config-data\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.796622 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea01709-ad14-4f28-beba-f1f2b24394a4-logs\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.807625 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-config-data\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.812242 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.817945 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: 
I0228 03:57:17.839828 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-config-data\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.852878 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.868390 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k99q\" (UniqueName: \"kubernetes.io/projected/1ed3527b-0935-4801-b27d-df8653a25097-kube-api-access-4k99q\") pod \"nova-scheduler-0\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.869135 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9z8s\" (UniqueName: \"kubernetes.io/projected/2ea01709-ad14-4f28-beba-f1f2b24394a4-kube-api-access-n9z8s\") pod \"nova-metadata-0\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " pod="openstack/nova-metadata-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.897551 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcwp\" (UniqueName: \"kubernetes.io/projected/dc271f84-9b09-4ac0-a69f-7bff0adb7498-kube-api-access-srcwp\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.897609 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.897653 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc271f84-9b09-4ac0-a69f-7bff0adb7498-logs\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.897698 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-config-data\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.901817 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc271f84-9b09-4ac0-a69f-7bff0adb7498-logs\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.901940 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-config-data\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.906188 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:17 crc kubenswrapper[4624]: I0228 03:57:17.940202 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srcwp\" (UniqueName: \"kubernetes.io/projected/dc271f84-9b09-4ac0-a69f-7bff0adb7498-kube-api-access-srcwp\") pod \"nova-api-0\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " pod="openstack/nova-api-0" Feb 28 03:57:18 crc 
kubenswrapper[4624]: I0228 03:57:18.003531 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-vtffs"] Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.005399 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.009653 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-svc\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.009693 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.009738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9kv\" (UniqueName: \"kubernetes.io/projected/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-kube-api-access-4n9kv\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.009764 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc 
kubenswrapper[4624]: I0228 03:57:18.009800 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-config\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.009871 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.022932 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.049298 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.096679 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-vtffs"] Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.157363 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.169911 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-svc\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.169969 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.170012 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9kv\" (UniqueName: \"kubernetes.io/projected/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-kube-api-access-4n9kv\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.170042 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.170115 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-config\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 
03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.170195 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.256230 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.256885 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.273344 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-svc\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.274754 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-config\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.280571 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.344485 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9kv\" (UniqueName: \"kubernetes.io/projected/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-kube-api-access-4n9kv\") pod \"dnsmasq-dns-865f5d856f-vtffs\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.454074 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.471192 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.474671 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.474794 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.497830 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.606368 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcjzs\" (UniqueName: \"kubernetes.io/projected/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-kube-api-access-dcjzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.606909 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.606943 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.775125 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcjzs\" (UniqueName: \"kubernetes.io/projected/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-kube-api-access-dcjzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.775599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.775674 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.802628 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.815275 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-59ccl"] Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.821077 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:18 crc kubenswrapper[4624]: I0228 03:57:18.843768 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcjzs\" (UniqueName: \"kubernetes.io/projected/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-kube-api-access-dcjzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.010795 4624 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.047679 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.176168 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.531715 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-vtffs"] Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.555623 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.828649 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.886492 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc271f84-9b09-4ac0-a69f-7bff0adb7498","Type":"ContainerStarted","Data":"b4efe367e9702d1637bdb1ac4a4364745b12ebf9f91569c7e8f64e685f631b8b"} Feb 28 03:57:19 crc kubenswrapper[4624]: I0228 03:57:19.935502 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ea01709-ad14-4f28-beba-f1f2b24394a4","Type":"ContainerStarted","Data":"e17b454218b3d2d98fc3e97385282aa45dac505a8a388729439f5fcf89a6c97f"} Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.006364 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerStarted","Data":"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0"} Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.006787 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.015308 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1ed3527b-0935-4801-b27d-df8653a25097","Type":"ContainerStarted","Data":"be073112661ccab6f4bdaf54427937649cbe06b7fac690274853262144359801"} Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.048916 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-59ccl" event={"ID":"e46fc9ec-81fe-446c-8592-9fcb0802aeb0","Type":"ContainerStarted","Data":"04e0cdc7b27ada011157ecf57a8735e5c646c9fc5b2df62b897977b4f85e9c38"} Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.048974 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-59ccl" event={"ID":"e46fc9ec-81fe-446c-8592-9fcb0802aeb0","Type":"ContainerStarted","Data":"1d6e8b7794faa2d72eab16e3421ff09685a500a6e70721f5e10b90bfc3dabaca"} Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.066567 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" event={"ID":"3ac07dd6-4fe7-4a79-a083-cf6b01a50026","Type":"ContainerStarted","Data":"c62c328f64a6799746c7dfe50e59fc2826a354786b2c777dedeeb9aaf5e395a7"} Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.078451 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.821127663 podStartE2EDuration="8.07841965s" podCreationTimestamp="2026-02-28 03:57:12 +0000 UTC" firstStartedPulling="2026-02-28 03:57:13.555353371 +0000 UTC m=+1288.219392730" lastFinishedPulling="2026-02-28 03:57:17.812645408 +0000 UTC m=+1292.476684717" observedRunningTime="2026-02-28 03:57:20.054635444 +0000 UTC m=+1294.718674743" watchObservedRunningTime="2026-02-28 03:57:20.07841965 +0000 UTC m=+1294.742458959" Feb 28 03:57:20 crc kubenswrapper[4624]: I0228 03:57:20.093548 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-59ccl" podStartSLOduration=3.09352249 
podStartE2EDuration="3.09352249s" podCreationTimestamp="2026-02-28 03:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:20.089723837 +0000 UTC m=+1294.753763146" watchObservedRunningTime="2026-02-28 03:57:20.09352249 +0000 UTC m=+1294.757561789" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.088771 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a2f3514-5076-427b-b99d-92ac7a0a0fb3","Type":"ContainerStarted","Data":"0477e860f2fde82ba414607bd8586883a3e202d49ac22f2a1024eece80f222d9"} Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.093012 4624 generic.go:334] "Generic (PLEG): container finished" podID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerID="c9fb9292421e6f8aef3835033372f0291bebcc607d6d95fdc584fbe219fc65bd" exitCode=0 Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.095176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" event={"ID":"3ac07dd6-4fe7-4a79-a083-cf6b01a50026","Type":"ContainerDied","Data":"c9fb9292421e6f8aef3835033372f0291bebcc607d6d95fdc584fbe219fc65bd"} Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.225256 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxv6f"] Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.226709 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.232762 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.233017 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.400053 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.400817 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlb6d\" (UniqueName: \"kubernetes.io/projected/dedb25a9-d021-48ae-82e9-cbf0dcba172f-kube-api-access-hlb6d\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.400944 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-scripts\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.401007 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-config-data\") pod \"nova-cell1-conductor-db-sync-pxv6f\" 
(UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.401757 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxv6f"] Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.506008 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-scripts\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.506092 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-config-data\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.506157 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.506203 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlb6d\" (UniqueName: \"kubernetes.io/projected/dedb25a9-d021-48ae-82e9-cbf0dcba172f-kube-api-access-hlb6d\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.537238 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hlb6d\" (UniqueName: \"kubernetes.io/projected/dedb25a9-d021-48ae-82e9-cbf0dcba172f-kube-api-access-hlb6d\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.561036 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-config-data\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.562708 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.563342 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-scripts\") pod \"nova-cell1-conductor-db-sync-pxv6f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:21 crc kubenswrapper[4624]: I0228 03:57:21.765306 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:22 crc kubenswrapper[4624]: I0228 03:57:22.203007 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:22 crc kubenswrapper[4624]: I0228 03:57:22.225674 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:24 crc kubenswrapper[4624]: I0228 03:57:24.145813 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:57:24 crc kubenswrapper[4624]: I0228 03:57:24.146297 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:57:24 crc kubenswrapper[4624]: I0228 03:57:24.147201 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:57:24 crc kubenswrapper[4624]: I0228 03:57:24.322223 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:57:24 crc kubenswrapper[4624]: I0228 03:57:24.322275 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:57:24 crc kubenswrapper[4624]: I0228 03:57:24.323858 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:57:26 crc kubenswrapper[4624]: I0228 03:57:26.240239 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-865f5d856f-vtffs" event={"ID":"3ac07dd6-4fe7-4a79-a083-cf6b01a50026","Type":"ContainerStarted","Data":"450edef818c18497528cefc4d66f255e2cbf730bbb6fcf4d8433e6706ab4fb4f"} Feb 28 03:57:26 crc kubenswrapper[4624]: I0228 03:57:26.242955 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:26 crc kubenswrapper[4624]: I0228 03:57:26.290651 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" podStartSLOduration=9.290626761 podStartE2EDuration="9.290626761s" podCreationTimestamp="2026-02-28 03:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:26.273919666 +0000 UTC m=+1300.937958975" watchObservedRunningTime="2026-02-28 03:57:26.290626761 +0000 UTC m=+1300.954666070" Feb 28 03:57:26 crc kubenswrapper[4624]: I0228 03:57:26.317714 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxv6f"] Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.254599 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a2f3514-5076-427b-b99d-92ac7a0a0fb3","Type":"ContainerStarted","Data":"b5d12408f2083d87a38e8f1779625ed668347139bf6bb90ed1b6bc032fd4dbff"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.255507 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3a2f3514-5076-427b-b99d-92ac7a0a0fb3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b5d12408f2083d87a38e8f1779625ed668347139bf6bb90ed1b6bc032fd4dbff" gracePeriod=30 Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.272447 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"1ed3527b-0935-4801-b27d-df8653a25097","Type":"ContainerStarted","Data":"8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.289505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" event={"ID":"dedb25a9-d021-48ae-82e9-cbf0dcba172f","Type":"ContainerStarted","Data":"5b3667bfebc2bb818b14ec3562a28b9829d94d6e4c9b9a51edb2d5c8a41b7538"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.289563 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" event={"ID":"dedb25a9-d021-48ae-82e9-cbf0dcba172f","Type":"ContainerStarted","Data":"fb548b618f79b2ce6c34b67d07fb7823a5d908ee5c627b52c5179fe2d253cdc8"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.303526 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc271f84-9b09-4ac0-a69f-7bff0adb7498","Type":"ContainerStarted","Data":"ab713829831aac4183fb7fc2b06a85ac6ffd9aa97e5479a6eda3d6e620f83e18"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.303632 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc271f84-9b09-4ac0-a69f-7bff0adb7498","Type":"ContainerStarted","Data":"4098ae793bcd8745d2a1c6f7d3580989de0a7b4163ad49fdcf2ef9bdea8c6216"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.328837 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-log" containerID="cri-o://4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025" gracePeriod=30 Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.329272 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2ea01709-ad14-4f28-beba-f1f2b24394a4","Type":"ContainerStarted","Data":"b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.329949 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ea01709-ad14-4f28-beba-f1f2b24394a4","Type":"ContainerStarted","Data":"4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025"} Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.329338 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-metadata" containerID="cri-o://b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490" gracePeriod=30 Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.343742 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.683971159 podStartE2EDuration="10.343714676s" podCreationTimestamp="2026-02-28 03:57:17 +0000 UTC" firstStartedPulling="2026-02-28 03:57:19.875740806 +0000 UTC m=+1294.539780115" lastFinishedPulling="2026-02-28 03:57:25.535484323 +0000 UTC m=+1300.199523632" observedRunningTime="2026-02-28 03:57:27.329640171 +0000 UTC m=+1301.993679480" watchObservedRunningTime="2026-02-28 03:57:27.343714676 +0000 UTC m=+1302.007753995" Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.371401 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.3911591229999996 podStartE2EDuration="10.371379122s" podCreationTimestamp="2026-02-28 03:57:17 +0000 UTC" firstStartedPulling="2026-02-28 03:57:19.561365571 +0000 UTC m=+1294.225404880" lastFinishedPulling="2026-02-28 03:57:25.54158557 +0000 UTC m=+1300.205624879" observedRunningTime="2026-02-28 03:57:27.360905083 +0000 UTC m=+1302.024944392" watchObservedRunningTime="2026-02-28 
03:57:27.371379122 +0000 UTC m=+1302.035418441" Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.393448 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" podStartSLOduration=6.393429419 podStartE2EDuration="6.393429419s" podCreationTimestamp="2026-02-28 03:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:27.383179106 +0000 UTC m=+1302.047218415" watchObservedRunningTime="2026-02-28 03:57:27.393429419 +0000 UTC m=+1302.057468728" Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.433062 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.149462098 podStartE2EDuration="10.433041425s" podCreationTimestamp="2026-02-28 03:57:17 +0000 UTC" firstStartedPulling="2026-02-28 03:57:19.192458493 +0000 UTC m=+1293.856497802" lastFinishedPulling="2026-02-28 03:57:25.47603782 +0000 UTC m=+1300.140077129" observedRunningTime="2026-02-28 03:57:27.420832089 +0000 UTC m=+1302.084871398" watchObservedRunningTime="2026-02-28 03:57:27.433041425 +0000 UTC m=+1302.097080734" Feb 28 03:57:27 crc kubenswrapper[4624]: I0228 03:57:27.460676 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.022441455 podStartE2EDuration="10.460659089s" podCreationTimestamp="2026-02-28 03:57:17 +0000 UTC" firstStartedPulling="2026-02-28 03:57:19.038447282 +0000 UTC m=+1293.702486591" lastFinishedPulling="2026-02-28 03:57:25.476664906 +0000 UTC m=+1300.140704225" observedRunningTime="2026-02-28 03:57:27.456949631 +0000 UTC m=+1302.120988940" watchObservedRunningTime="2026-02-28 03:57:27.460659089 +0000 UTC m=+1302.124698398" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.023888 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.023943 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.050676 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.050728 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.103888 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.159333 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.187067 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.340044 4624 generic.go:334] "Generic (PLEG): container finished" podID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerID="4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025" exitCode=143 Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.340191 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ea01709-ad14-4f28-beba-f1f2b24394a4","Type":"ContainerDied","Data":"4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025"} Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.476544 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 03:57:28 crc kubenswrapper[4624]: I0228 03:57:28.894118 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.041521 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9z8s\" (UniqueName: \"kubernetes.io/projected/2ea01709-ad14-4f28-beba-f1f2b24394a4-kube-api-access-n9z8s\") pod \"2ea01709-ad14-4f28-beba-f1f2b24394a4\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.041754 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-config-data\") pod \"2ea01709-ad14-4f28-beba-f1f2b24394a4\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.041794 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-combined-ca-bundle\") pod \"2ea01709-ad14-4f28-beba-f1f2b24394a4\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.041935 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea01709-ad14-4f28-beba-f1f2b24394a4-logs\") pod \"2ea01709-ad14-4f28-beba-f1f2b24394a4\" (UID: \"2ea01709-ad14-4f28-beba-f1f2b24394a4\") " Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.042666 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea01709-ad14-4f28-beba-f1f2b24394a4-logs" (OuterVolumeSpecName: "logs") pod "2ea01709-ad14-4f28-beba-f1f2b24394a4" (UID: "2ea01709-ad14-4f28-beba-f1f2b24394a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.049179 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.075415 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea01709-ad14-4f28-beba-f1f2b24394a4-kube-api-access-n9z8s" (OuterVolumeSpecName: "kube-api-access-n9z8s") pod "2ea01709-ad14-4f28-beba-f1f2b24394a4" (UID: "2ea01709-ad14-4f28-beba-f1f2b24394a4"). InnerVolumeSpecName "kube-api-access-n9z8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.115003 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea01709-ad14-4f28-beba-f1f2b24394a4" (UID: "2ea01709-ad14-4f28-beba-f1f2b24394a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.121775 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-config-data" (OuterVolumeSpecName: "config-data") pod "2ea01709-ad14-4f28-beba-f1f2b24394a4" (UID: "2ea01709-ad14-4f28-beba-f1f2b24394a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.144594 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.144714 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea01709-ad14-4f28-beba-f1f2b24394a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.144799 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea01709-ad14-4f28-beba-f1f2b24394a4-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.144862 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9z8s\" (UniqueName: \"kubernetes.io/projected/2ea01709-ad14-4f28-beba-f1f2b24394a4-kube-api-access-n9z8s\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.243681 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.243756 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.370792 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerID="b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490" exitCode=0 Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.372067 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.372197 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ea01709-ad14-4f28-beba-f1f2b24394a4","Type":"ContainerDied","Data":"b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490"} Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.372307 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ea01709-ad14-4f28-beba-f1f2b24394a4","Type":"ContainerDied","Data":"e17b454218b3d2d98fc3e97385282aa45dac505a8a388729439f5fcf89a6c97f"} Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.372331 4624 scope.go:117] "RemoveContainer" containerID="b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.415454 4624 scope.go:117] "RemoveContainer" containerID="4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.442241 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.481471 4624 scope.go:117] "RemoveContainer" containerID="b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490" Feb 28 03:57:29 crc kubenswrapper[4624]: E0228 03:57:29.482071 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490\": container with ID starting with b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490 not found: ID does not exist" 
containerID="b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.482125 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490"} err="failed to get container status \"b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490\": rpc error: code = NotFound desc = could not find container \"b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490\": container with ID starting with b9768de734c4f8a4a68712376a85e00b4d2840a79bc3e7fcf927ec6e28269490 not found: ID does not exist" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.482151 4624 scope.go:117] "RemoveContainer" containerID="4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025" Feb 28 03:57:29 crc kubenswrapper[4624]: E0228 03:57:29.482339 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025\": container with ID starting with 4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025 not found: ID does not exist" containerID="4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.482356 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025"} err="failed to get container status \"4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025\": rpc error: code = NotFound desc = could not find container \"4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025\": container with ID starting with 4de73b507dcfad0c6ad1f107b0a47506361794d74d11243841353d61b8081025 not found: ID does not exist" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.487188 4624 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.499155 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:29 crc kubenswrapper[4624]: E0228 03:57:29.499712 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-log" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.499733 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-log" Feb 28 03:57:29 crc kubenswrapper[4624]: E0228 03:57:29.499746 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-metadata" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.499755 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-metadata" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.499949 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-log" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.499979 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" containerName="nova-metadata-metadata" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.501103 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.511423 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.511705 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.521645 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.572536 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.572756 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45bmm\" (UniqueName: \"kubernetes.io/projected/7743fde0-948b-45e0-9c8c-b90ec50005e7-kube-api-access-45bmm\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.573194 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-config-data\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.573501 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7743fde0-948b-45e0-9c8c-b90ec50005e7-logs\") pod \"nova-metadata-0\" (UID: 
\"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.573546 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.676246 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.676323 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45bmm\" (UniqueName: \"kubernetes.io/projected/7743fde0-948b-45e0-9c8c-b90ec50005e7-kube-api-access-45bmm\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.676383 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-config-data\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.676440 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7743fde0-948b-45e0-9c8c-b90ec50005e7-logs\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.676462 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.677251 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7743fde0-948b-45e0-9c8c-b90ec50005e7-logs\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.681020 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.701854 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.704075 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-config-data\") pod \"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.714745 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45bmm\" (UniqueName: \"kubernetes.io/projected/7743fde0-948b-45e0-9c8c-b90ec50005e7-kube-api-access-45bmm\") pod 
\"nova-metadata-0\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " pod="openstack/nova-metadata-0" Feb 28 03:57:29 crc kubenswrapper[4624]: I0228 03:57:29.829576 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:30 crc kubenswrapper[4624]: I0228 03:57:30.141298 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea01709-ad14-4f28-beba-f1f2b24394a4" path="/var/lib/kubelet/pods/2ea01709-ad14-4f28-beba-f1f2b24394a4/volumes" Feb 28 03:57:30 crc kubenswrapper[4624]: I0228 03:57:30.504039 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:31 crc kubenswrapper[4624]: I0228 03:57:31.443316 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7743fde0-948b-45e0-9c8c-b90ec50005e7","Type":"ContainerStarted","Data":"c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015"} Feb 28 03:57:31 crc kubenswrapper[4624]: I0228 03:57:31.443640 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7743fde0-948b-45e0-9c8c-b90ec50005e7","Type":"ContainerStarted","Data":"211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8"} Feb 28 03:57:31 crc kubenswrapper[4624]: I0228 03:57:31.443656 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7743fde0-948b-45e0-9c8c-b90ec50005e7","Type":"ContainerStarted","Data":"d29ec654b58dcf2742510676b0f5a6cbad1089c8ca43f1dba050e37585fff00f"} Feb 28 03:57:31 crc kubenswrapper[4624]: I0228 03:57:31.496007 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.495984403 podStartE2EDuration="2.495984403s" podCreationTimestamp="2026-02-28 03:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 03:57:31.492362087 +0000 UTC m=+1306.156401386" watchObservedRunningTime="2026-02-28 03:57:31.495984403 +0000 UTC m=+1306.160023712" Feb 28 03:57:33 crc kubenswrapper[4624]: I0228 03:57:33.456355 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:33 crc kubenswrapper[4624]: I0228 03:57:33.560810 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-z26db"] Feb 28 03:57:33 crc kubenswrapper[4624]: I0228 03:57:33.561903 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerName="dnsmasq-dns" containerID="cri-o://b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21" gracePeriod=10 Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.154344 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.244829 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.306518 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-swift-storage-0\") pod \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.306868 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-sb\") pod \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.306914 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-config\") pod \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.306997 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg7dn\" (UniqueName: \"kubernetes.io/projected/be61ab6a-7cb4-40a0-9658-0c58aaeba834-kube-api-access-hg7dn\") pod \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.307023 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-svc\") pod \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.307056 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-nb\") pod \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\" (UID: \"be61ab6a-7cb4-40a0-9658-0c58aaeba834\") " Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.328609 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.340379 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be61ab6a-7cb4-40a0-9658-0c58aaeba834-kube-api-access-hg7dn" (OuterVolumeSpecName: "kube-api-access-hg7dn") pod "be61ab6a-7cb4-40a0-9658-0c58aaeba834" (UID: "be61ab6a-7cb4-40a0-9658-0c58aaeba834"). InnerVolumeSpecName "kube-api-access-hg7dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.410662 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg7dn\" (UniqueName: \"kubernetes.io/projected/be61ab6a-7cb4-40a0-9658-0c58aaeba834-kube-api-access-hg7dn\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.423383 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be61ab6a-7cb4-40a0-9658-0c58aaeba834" (UID: "be61ab6a-7cb4-40a0-9658-0c58aaeba834"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.462649 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-config" (OuterVolumeSpecName: "config") pod "be61ab6a-7cb4-40a0-9658-0c58aaeba834" (UID: "be61ab6a-7cb4-40a0-9658-0c58aaeba834"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.496771 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be61ab6a-7cb4-40a0-9658-0c58aaeba834" (UID: "be61ab6a-7cb4-40a0-9658-0c58aaeba834"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.499764 4624 generic.go:334] "Generic (PLEG): container finished" podID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerID="b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21" exitCode=0 Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.499812 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" event={"ID":"be61ab6a-7cb4-40a0-9658-0c58aaeba834","Type":"ContainerDied","Data":"b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21"} Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.499843 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" event={"ID":"be61ab6a-7cb4-40a0-9658-0c58aaeba834","Type":"ContainerDied","Data":"2cf47ca7acbfbf57e67efeb763f7d13af68e1efcb0c97546794ca4b90f4cafd7"} Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.499863 4624 scope.go:117] "RemoveContainer" containerID="b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21" Feb 28 03:57:34 crc 
kubenswrapper[4624]: I0228 03:57:34.500040 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-z26db" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.501623 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be61ab6a-7cb4-40a0-9658-0c58aaeba834" (UID: "be61ab6a-7cb4-40a0-9658-0c58aaeba834"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.513305 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.513352 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.513365 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.513374 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.553274 4624 scope.go:117] "RemoveContainer" containerID="ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.553795 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be61ab6a-7cb4-40a0-9658-0c58aaeba834" (UID: "be61ab6a-7cb4-40a0-9658-0c58aaeba834"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.593770 4624 scope.go:117] "RemoveContainer" containerID="b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21" Feb 28 03:57:34 crc kubenswrapper[4624]: E0228 03:57:34.596733 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21\": container with ID starting with b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21 not found: ID does not exist" containerID="b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.596779 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21"} err="failed to get container status \"b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21\": rpc error: code = NotFound desc = could not find container \"b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21\": container with ID starting with b18ef48c8b43acb121ee84f9f04696811c0e0a7caf3dccbdcdb0e3287c9a6c21 not found: ID does not exist" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.596806 4624 scope.go:117] "RemoveContainer" containerID="ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9" Feb 28 03:57:34 crc kubenswrapper[4624]: E0228 03:57:34.599368 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9\": 
container with ID starting with ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9 not found: ID does not exist" containerID="ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.599425 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9"} err="failed to get container status \"ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9\": rpc error: code = NotFound desc = could not find container \"ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9\": container with ID starting with ae06534ec6e41fb1e88d9d6ab6677748792f7d720c61b1f83fa51137e23f66d9 not found: ID does not exist" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.615786 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be61ab6a-7cb4-40a0-9658-0c58aaeba834-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.830606 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.831215 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.856860 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-z26db"] Feb 28 03:57:34 crc kubenswrapper[4624]: I0228 03:57:34.869980 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-z26db"] Feb 28 03:57:36 crc kubenswrapper[4624]: I0228 03:57:36.100063 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" path="/var/lib/kubelet/pods/be61ab6a-7cb4-40a0-9658-0c58aaeba834/volumes" Feb 28 03:57:38 
crc kubenswrapper[4624]: I0228 03:57:38.185336 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.186167 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.197078 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.204701 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.546698 4624 generic.go:334] "Generic (PLEG): container finished" podID="e46fc9ec-81fe-446c-8592-9fcb0802aeb0" containerID="04e0cdc7b27ada011157ecf57a8735e5c646c9fc5b2df62b897977b4f85e9c38" exitCode=0 Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.547066 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-59ccl" event={"ID":"e46fc9ec-81fe-446c-8592-9fcb0802aeb0","Type":"ContainerDied","Data":"04e0cdc7b27ada011157ecf57a8735e5c646c9fc5b2df62b897977b4f85e9c38"} Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.547905 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.573771 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.882589 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-lfdh8"] Feb 28 03:57:38 crc kubenswrapper[4624]: E0228 03:57:38.883161 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerName="init" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.883185 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerName="init" Feb 28 03:57:38 crc kubenswrapper[4624]: E0228 03:57:38.883233 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerName="dnsmasq-dns" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.883240 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerName="dnsmasq-dns" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.883472 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="be61ab6a-7cb4-40a0-9658-0c58aaeba834" containerName="dnsmasq-dns" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.886098 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:38 crc kubenswrapper[4624]: I0228 03:57:38.934194 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-lfdh8"] Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.030117 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.030334 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.030462 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8j6j2\" (UniqueName: \"kubernetes.io/projected/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-kube-api-access-8j6j2\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.030564 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.031052 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.031142 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-config\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.138234 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.138288 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-config\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.138314 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.138341 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.138363 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6j2\" (UniqueName: \"kubernetes.io/projected/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-kube-api-access-8j6j2\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.138396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.139467 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-config\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.139468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.140039 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.140501 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.140543 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.167425 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6j2\" (UniqueName: \"kubernetes.io/projected/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-kube-api-access-8j6j2\") pod 
\"dnsmasq-dns-5c7b6c5df9-lfdh8\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.212361 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.830114 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.830954 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 03:57:39 crc kubenswrapper[4624]: I0228 03:57:39.972590 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-lfdh8"] Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.403631 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.510538 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-config-data\") pod \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.510679 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-combined-ca-bundle\") pod \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.510713 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-scripts\") pod \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\" 
(UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.510753 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld86h\" (UniqueName: \"kubernetes.io/projected/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-kube-api-access-ld86h\") pod \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\" (UID: \"e46fc9ec-81fe-446c-8592-9fcb0802aeb0\") " Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.528344 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-scripts" (OuterVolumeSpecName: "scripts") pod "e46fc9ec-81fe-446c-8592-9fcb0802aeb0" (UID: "e46fc9ec-81fe-446c-8592-9fcb0802aeb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.547445 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-kube-api-access-ld86h" (OuterVolumeSpecName: "kube-api-access-ld86h") pod "e46fc9ec-81fe-446c-8592-9fcb0802aeb0" (UID: "e46fc9ec-81fe-446c-8592-9fcb0802aeb0"). InnerVolumeSpecName "kube-api-access-ld86h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.595346 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e46fc9ec-81fe-446c-8592-9fcb0802aeb0" (UID: "e46fc9ec-81fe-446c-8592-9fcb0802aeb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.601159 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-config-data" (OuterVolumeSpecName: "config-data") pod "e46fc9ec-81fe-446c-8592-9fcb0802aeb0" (UID: "e46fc9ec-81fe-446c-8592-9fcb0802aeb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.615927 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" event={"ID":"31f2aae0-acdc-4c37-bb36-e7685b9dfe42","Type":"ContainerStarted","Data":"89ae84bfaa7a8d8e9f40ad2ff7fcba567cc06abb8194fe80160e54ff60a7e5e3"} Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.616014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" event={"ID":"31f2aae0-acdc-4c37-bb36-e7685b9dfe42","Type":"ContainerStarted","Data":"5653e232546713383d8fc126cf64eaa43b721361ed022d2d06d5aa03171b78e5"} Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.618890 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.618914 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.618929 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.618940 4624 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-ld86h\" (UniqueName: \"kubernetes.io/projected/e46fc9ec-81fe-446c-8592-9fcb0802aeb0-kube-api-access-ld86h\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.640669 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-59ccl" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.644074 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-59ccl" event={"ID":"e46fc9ec-81fe-446c-8592-9fcb0802aeb0","Type":"ContainerDied","Data":"1d6e8b7794faa2d72eab16e3421ff09685a500a6e70721f5e10b90bfc3dabaca"} Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.644148 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6e8b7794faa2d72eab16e3421ff09685a500a6e70721f5e10b90bfc3dabaca" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.815007 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.815612 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1ed3527b-0935-4801-b27d-df8653a25097" containerName="nova-scheduler-scheduler" containerID="cri-o://8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f" gracePeriod=30 Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.829758 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.876877 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.877166 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-log" 
containerID="cri-o://211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8" gracePeriod=30 Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.877710 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-metadata" containerID="cri-o://c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015" gracePeriod=30 Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.888684 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:57:40 crc kubenswrapper[4624]: I0228 03:57:40.889115 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:57:41 crc kubenswrapper[4624]: I0228 03:57:41.651863 4624 generic.go:334] "Generic (PLEG): container finished" podID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerID="211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8" exitCode=143 Feb 28 03:57:41 crc kubenswrapper[4624]: I0228 03:57:41.651951 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7743fde0-948b-45e0-9c8c-b90ec50005e7","Type":"ContainerDied","Data":"211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8"} Feb 28 03:57:41 crc kubenswrapper[4624]: I0228 03:57:41.655043 4624 generic.go:334] "Generic (PLEG): container finished" podID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" 
containerID="89ae84bfaa7a8d8e9f40ad2ff7fcba567cc06abb8194fe80160e54ff60a7e5e3" exitCode=0 Feb 28 03:57:41 crc kubenswrapper[4624]: I0228 03:57:41.655145 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" event={"ID":"31f2aae0-acdc-4c37-bb36-e7685b9dfe42","Type":"ContainerDied","Data":"89ae84bfaa7a8d8e9f40ad2ff7fcba567cc06abb8194fe80160e54ff60a7e5e3"} Feb 28 03:57:41 crc kubenswrapper[4624]: I0228 03:57:41.655383 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-log" containerID="cri-o://4098ae793bcd8745d2a1c6f7d3580989de0a7b4163ad49fdcf2ef9bdea8c6216" gracePeriod=30 Feb 28 03:57:41 crc kubenswrapper[4624]: I0228 03:57:41.655494 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-api" containerID="cri-o://ab713829831aac4183fb7fc2b06a85ac6ffd9aa97e5479a6eda3d6e620f83e18" gracePeriod=30 Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.557651 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.668189 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-combined-ca-bundle\") pod \"1ed3527b-0935-4801-b27d-df8653a25097\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.668386 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k99q\" (UniqueName: \"kubernetes.io/projected/1ed3527b-0935-4801-b27d-df8653a25097-kube-api-access-4k99q\") pod \"1ed3527b-0935-4801-b27d-df8653a25097\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.668548 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-config-data\") pod \"1ed3527b-0935-4801-b27d-df8653a25097\" (UID: \"1ed3527b-0935-4801-b27d-df8653a25097\") " Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.680279 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" event={"ID":"31f2aae0-acdc-4c37-bb36-e7685b9dfe42","Type":"ContainerStarted","Data":"2afa3009d79d90b98db31ea963dab79228bf4287abf95a2bcfddb50176b7fa8a"} Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.680967 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.697852 4624 generic.go:334] "Generic (PLEG): container finished" podID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerID="4098ae793bcd8745d2a1c6f7d3580989de0a7b4163ad49fdcf2ef9bdea8c6216" exitCode=143 Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.697947 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"dc271f84-9b09-4ac0-a69f-7bff0adb7498","Type":"ContainerDied","Data":"4098ae793bcd8745d2a1c6f7d3580989de0a7b4163ad49fdcf2ef9bdea8c6216"} Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.700096 4624 generic.go:334] "Generic (PLEG): container finished" podID="1ed3527b-0935-4801-b27d-df8653a25097" containerID="8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f" exitCode=0 Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.700173 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1ed3527b-0935-4801-b27d-df8653a25097","Type":"ContainerDied","Data":"8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f"} Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.700200 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1ed3527b-0935-4801-b27d-df8653a25097","Type":"ContainerDied","Data":"be073112661ccab6f4bdaf54427937649cbe06b7fac690274853262144359801"} Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.700247 4624 scope.go:117] "RemoveContainer" containerID="8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.700508 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.705361 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed3527b-0935-4801-b27d-df8653a25097-kube-api-access-4k99q" (OuterVolumeSpecName: "kube-api-access-4k99q") pod "1ed3527b-0935-4801-b27d-df8653a25097" (UID: "1ed3527b-0935-4801-b27d-df8653a25097"). InnerVolumeSpecName "kube-api-access-4k99q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.710297 4624 generic.go:334] "Generic (PLEG): container finished" podID="dedb25a9-d021-48ae-82e9-cbf0dcba172f" containerID="5b3667bfebc2bb818b14ec3562a28b9829d94d6e4c9b9a51edb2d5c8a41b7538" exitCode=0 Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.710384 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" event={"ID":"dedb25a9-d021-48ae-82e9-cbf0dcba172f","Type":"ContainerDied","Data":"5b3667bfebc2bb818b14ec3562a28b9829d94d6e4c9b9a51edb2d5c8a41b7538"} Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.742198 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ed3527b-0935-4801-b27d-df8653a25097" (UID: "1ed3527b-0935-4801-b27d-df8653a25097"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.744835 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-config-data" (OuterVolumeSpecName: "config-data") pod "1ed3527b-0935-4801-b27d-df8653a25097" (UID: "1ed3527b-0935-4801-b27d-df8653a25097"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.753129 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" podStartSLOduration=4.753072449 podStartE2EDuration="4.753072449s" podCreationTimestamp="2026-02-28 03:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:42.720794449 +0000 UTC m=+1317.384833758" watchObservedRunningTime="2026-02-28 03:57:42.753072449 +0000 UTC m=+1317.417111758" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.770907 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.770941 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k99q\" (UniqueName: \"kubernetes.io/projected/1ed3527b-0935-4801-b27d-df8653a25097-kube-api-access-4k99q\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.770952 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed3527b-0935-4801-b27d-df8653a25097-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.818408 4624 scope.go:117] "RemoveContainer" containerID="8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f" Feb 28 03:57:42 crc kubenswrapper[4624]: E0228 03:57:42.819845 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f\": container with ID starting with 8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f not found: ID does not 
exist" containerID="8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f" Feb 28 03:57:42 crc kubenswrapper[4624]: I0228 03:57:42.819908 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f"} err="failed to get container status \"8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f\": rpc error: code = NotFound desc = could not find container \"8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f\": container with ID starting with 8cc8223b87113fc0ddf46d78cd9cdf9b8e9a93f27476f7b4d10e7e149afbd32f not found: ID does not exist" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.049244 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.062840 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.121020 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:43 crc kubenswrapper[4624]: E0228 03:57:43.121546 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed3527b-0935-4801-b27d-df8653a25097" containerName="nova-scheduler-scheduler" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.121571 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed3527b-0935-4801-b27d-df8653a25097" containerName="nova-scheduler-scheduler" Feb 28 03:57:43 crc kubenswrapper[4624]: E0228 03:57:43.121602 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46fc9ec-81fe-446c-8592-9fcb0802aeb0" containerName="nova-manage" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.121611 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46fc9ec-81fe-446c-8592-9fcb0802aeb0" containerName="nova-manage" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.121803 
4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e46fc9ec-81fe-446c-8592-9fcb0802aeb0" containerName="nova-manage" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.121831 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed3527b-0935-4801-b27d-df8653a25097" containerName="nova-scheduler-scheduler" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.122600 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.130930 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.138124 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.180749 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhlb\" (UniqueName: \"kubernetes.io/projected/e5689b27-efcc-4238-ab31-f85edaa239d6-kube-api-access-mkhlb\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.180801 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.180879 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-config-data\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " 
pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.283149 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkhlb\" (UniqueName: \"kubernetes.io/projected/e5689b27-efcc-4238-ab31-f85edaa239d6-kube-api-access-mkhlb\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.283202 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.283277 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-config-data\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.291289 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-config-data\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.301111 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.318715 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mkhlb\" (UniqueName: \"kubernetes.io/projected/e5689b27-efcc-4238-ab31-f85edaa239d6-kube-api-access-mkhlb\") pod \"nova-scheduler-0\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.446474 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:57:43 crc kubenswrapper[4624]: I0228 03:57:43.617828 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.109783 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed3527b-0935-4801-b27d-df8653a25097" path="/var/lib/kubelet/pods/1ed3527b-0935-4801-b27d-df8653a25097/volumes" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.147156 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.155428 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.155558 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.156550 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"93a0882fc2c46a1d7ae2f33f5d46e75723aa417d1b35f6b58392edea81f40dce"} pod="openstack/horizon-5b4bc59cd8-fkd4p" containerMessage="Container horizon failed startup probe, will be restarted" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.156586 4624 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" containerID="cri-o://93a0882fc2c46a1d7ae2f33f5d46e75723aa417d1b35f6b58392edea81f40dce" gracePeriod=30 Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.280225 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.323280 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.323371 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.324719 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"044c0669362491674e475d7359b88b571d3a8160c975f83c5501cc2c9a104e26"} pod="openstack/horizon-6cc988c5cd-svksm" containerMessage="Container horizon failed startup probe, will be restarted" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.324759 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" containerID="cri-o://044c0669362491674e475d7359b88b571d3a8160c975f83c5501cc2c9a104e26" gracePeriod=30 Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.415033 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-combined-ca-bundle\") pod \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.415168 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-scripts\") pod \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.415312 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlb6d\" (UniqueName: \"kubernetes.io/projected/dedb25a9-d021-48ae-82e9-cbf0dcba172f-kube-api-access-hlb6d\") pod \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.415573 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-config-data\") pod \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\" (UID: \"dedb25a9-d021-48ae-82e9-cbf0dcba172f\") " Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.437353 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-scripts" (OuterVolumeSpecName: "scripts") pod "dedb25a9-d021-48ae-82e9-cbf0dcba172f" (UID: "dedb25a9-d021-48ae-82e9-cbf0dcba172f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.439516 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedb25a9-d021-48ae-82e9-cbf0dcba172f-kube-api-access-hlb6d" (OuterVolumeSpecName: "kube-api-access-hlb6d") pod "dedb25a9-d021-48ae-82e9-cbf0dcba172f" (UID: "dedb25a9-d021-48ae-82e9-cbf0dcba172f"). InnerVolumeSpecName "kube-api-access-hlb6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.489653 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-config-data" (OuterVolumeSpecName: "config-data") pod "dedb25a9-d021-48ae-82e9-cbf0dcba172f" (UID: "dedb25a9-d021-48ae-82e9-cbf0dcba172f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.518413 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dedb25a9-d021-48ae-82e9-cbf0dcba172f" (UID: "dedb25a9-d021-48ae-82e9-cbf0dcba172f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.521892 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.521922 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.521932 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dedb25a9-d021-48ae-82e9-cbf0dcba172f-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.521942 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlb6d\" (UniqueName: \"kubernetes.io/projected/dedb25a9-d021-48ae-82e9-cbf0dcba172f-kube-api-access-hlb6d\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.751993 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5689b27-efcc-4238-ab31-f85edaa239d6","Type":"ContainerStarted","Data":"5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2"} Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.752600 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5689b27-efcc-4238-ab31-f85edaa239d6","Type":"ContainerStarted","Data":"b46e127717358a610eb9982f150d72b489c019a3d9e7d9849e2a88fd915f8af2"} Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.754291 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.754211 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxv6f" event={"ID":"dedb25a9-d021-48ae-82e9-cbf0dcba172f","Type":"ContainerDied","Data":"fb548b618f79b2ce6c34b67d07fb7823a5d908ee5c627b52c5179fe2d253cdc8"} Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.767249 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb548b618f79b2ce6c34b67d07fb7823a5d908ee5c627b52c5179fe2d253cdc8" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.789463 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.7894364889999999 podStartE2EDuration="1.789436489s" podCreationTimestamp="2026-02-28 03:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:44.780444299 +0000 UTC m=+1319.444483608" watchObservedRunningTime="2026-02-28 03:57:44.789436489 +0000 UTC m=+1319.453475798" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.865832 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 03:57:44 crc kubenswrapper[4624]: E0228 03:57:44.866285 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedb25a9-d021-48ae-82e9-cbf0dcba172f" containerName="nova-cell1-conductor-db-sync" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.866302 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedb25a9-d021-48ae-82e9-cbf0dcba172f" containerName="nova-cell1-conductor-db-sync" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.866512 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedb25a9-d021-48ae-82e9-cbf0dcba172f" containerName="nova-cell1-conductor-db-sync" Feb 28 03:57:44 crc 
kubenswrapper[4624]: I0228 03:57:44.867157 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.869226 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.888674 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.931287 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.931351 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqsd\" (UniqueName: \"kubernetes.io/projected/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-kube-api-access-mnqsd\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:44 crc kubenswrapper[4624]: I0228 03:57:44.931404 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.034406 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.034471 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqsd\" (UniqueName: \"kubernetes.io/projected/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-kube-api-access-mnqsd\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.034506 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.046748 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.059186 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.092412 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqsd\" (UniqueName: \"kubernetes.io/projected/cff7ee8c-629b-43aa-a39b-1b2282c58d2b-kube-api-access-mnqsd\") pod \"nova-cell1-conductor-0\" (UID: \"cff7ee8c-629b-43aa-a39b-1b2282c58d2b\") " pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: 
I0228 03:57:45.183972 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.816121 4624 generic.go:334] "Generic (PLEG): container finished" podID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerID="ab713829831aac4183fb7fc2b06a85ac6ffd9aa97e5479a6eda3d6e620f83e18" exitCode=0 Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.817012 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc271f84-9b09-4ac0-a69f-7bff0adb7498","Type":"ContainerDied","Data":"ab713829831aac4183fb7fc2b06a85ac6ffd9aa97e5479a6eda3d6e620f83e18"} Feb 28 03:57:45 crc kubenswrapper[4624]: I0228 03:57:45.911360 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.193545 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.297000 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.299181 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-central-agent" containerID="cri-o://23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926" gracePeriod=30 Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.299762 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="proxy-httpd" containerID="cri-o://c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0" gracePeriod=30 Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.299820 4624 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="sg-core" containerID="cri-o://52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d" gracePeriod=30 Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.299886 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-notification-agent" containerID="cri-o://8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131" gracePeriod=30 Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.394155 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcwp\" (UniqueName: \"kubernetes.io/projected/dc271f84-9b09-4ac0-a69f-7bff0adb7498-kube-api-access-srcwp\") pod \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.394297 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-config-data\") pod \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.394376 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc271f84-9b09-4ac0-a69f-7bff0adb7498-logs\") pod \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.394428 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-combined-ca-bundle\") pod \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\" (UID: \"dc271f84-9b09-4ac0-a69f-7bff0adb7498\") " Feb 28 03:57:46 crc 
kubenswrapper[4624]: I0228 03:57:46.396414 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc271f84-9b09-4ac0-a69f-7bff0adb7498-logs" (OuterVolumeSpecName: "logs") pod "dc271f84-9b09-4ac0-a69f-7bff0adb7498" (UID: "dc271f84-9b09-4ac0-a69f-7bff0adb7498"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.496565 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc271f84-9b09-4ac0-a69f-7bff0adb7498-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.661467 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc271f84-9b09-4ac0-a69f-7bff0adb7498-kube-api-access-srcwp" (OuterVolumeSpecName: "kube-api-access-srcwp") pod "dc271f84-9b09-4ac0-a69f-7bff0adb7498" (UID: "dc271f84-9b09-4ac0-a69f-7bff0adb7498"). InnerVolumeSpecName "kube-api-access-srcwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.701762 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcwp\" (UniqueName: \"kubernetes.io/projected/dc271f84-9b09-4ac0-a69f-7bff0adb7498-kube-api-access-srcwp\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.835433 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc271f84-9b09-4ac0-a69f-7bff0adb7498","Type":"ContainerDied","Data":"b4efe367e9702d1637bdb1ac4a4364745b12ebf9f91569c7e8f64e685f631b8b"} Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.835511 4624 scope.go:117] "RemoveContainer" containerID="ab713829831aac4183fb7fc2b06a85ac6ffd9aa97e5479a6eda3d6e620f83e18" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.835723 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.840321 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cff7ee8c-629b-43aa-a39b-1b2282c58d2b","Type":"ContainerStarted","Data":"ab76a1a3a03c89a2bbece3f73159736a8328784315f4298efd3e2a66266994ca"} Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.938222 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-config-data" (OuterVolumeSpecName: "config-data") pod "dc271f84-9b09-4ac0-a69f-7bff0adb7498" (UID: "dc271f84-9b09-4ac0-a69f-7bff0adb7498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:46 crc kubenswrapper[4624]: I0228 03:57:46.939259 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc271f84-9b09-4ac0-a69f-7bff0adb7498" (UID: "dc271f84-9b09-4ac0-a69f-7bff0adb7498"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.009603 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.009641 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc271f84-9b09-4ac0-a69f-7bff0adb7498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.061975 4624 scope.go:117] "RemoveContainer" containerID="4098ae793bcd8745d2a1c6f7d3580989de0a7b4163ad49fdcf2ef9bdea8c6216" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.207534 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.239858 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.257278 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:47 crc kubenswrapper[4624]: E0228 03:57:47.257906 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-api" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.257926 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-api" Feb 28 03:57:47 crc kubenswrapper[4624]: E0228 03:57:47.257952 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-log" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.257960 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-log" Feb 28 03:57:47 
crc kubenswrapper[4624]: I0228 03:57:47.258210 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-log" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.258245 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" containerName="nova-api-api" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.259564 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.265358 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.265559 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.265701 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.270626 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.440147 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19a482f-ce28-4c03-8976-6fcf560499aa-logs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.440276 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-config-data\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.440340 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.440358 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.440429 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcfgg\" (UniqueName: \"kubernetes.io/projected/f19a482f-ce28-4c03-8976-6fcf560499aa-kube-api-access-xcfgg\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.440611 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.542436 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.542509 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f19a482f-ce28-4c03-8976-6fcf560499aa-logs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.542556 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-config-data\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.542599 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.542627 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.542663 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcfgg\" (UniqueName: \"kubernetes.io/projected/f19a482f-ce28-4c03-8976-6fcf560499aa-kube-api-access-xcfgg\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.543523 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19a482f-ce28-4c03-8976-6fcf560499aa-logs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.551071 4624 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.551222 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-config-data\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.552309 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.559604 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.563350 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcfgg\" (UniqueName: \"kubernetes.io/projected/f19a482f-ce28-4c03-8976-6fcf560499aa-kube-api-access-xcfgg\") pod \"nova-api-0\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.585057 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.917396 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cff7ee8c-629b-43aa-a39b-1b2282c58d2b","Type":"ContainerStarted","Data":"efdadf65137dcf1dbae03e641f5e715a3f88fabfa3c39a542ed9b3d98bfa1b78"} Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.919890 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.969667 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.969649931 podStartE2EDuration="3.969649931s" podCreationTimestamp="2026-02-28 03:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:47.966283161 +0000 UTC m=+1322.630322470" watchObservedRunningTime="2026-02-28 03:57:47.969649931 +0000 UTC m=+1322.633689240" Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.993766 4624 generic.go:334] "Generic (PLEG): container finished" podID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerID="c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0" exitCode=0 Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.993820 4624 generic.go:334] "Generic (PLEG): container finished" podID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerID="52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d" exitCode=2 Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.993853 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerDied","Data":"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0"} Feb 28 03:57:47 crc kubenswrapper[4624]: I0228 03:57:47.993895 4624 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerDied","Data":"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d"} Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.104392 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc271f84-9b09-4ac0-a69f-7bff0adb7498" path="/var/lib/kubelet/pods/dc271f84-9b09-4ac0-a69f-7bff0adb7498/volumes" Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.277725 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.447642 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.901473 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.983510 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-sg-core-conf-yaml\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.984039 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-config-data\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.984120 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcd7j\" (UniqueName: \"kubernetes.io/projected/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-kube-api-access-hcd7j\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: 
\"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.984155 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-log-httpd\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.984229 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-combined-ca-bundle\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.984315 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-scripts\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.984364 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-run-httpd\") pod \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\" (UID: \"3ef2ffb7-20c6-43d9-95be-5812a67f5c88\") " Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.985956 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.989522 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:48 crc kubenswrapper[4624]: I0228 03:57:48.989531 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-kube-api-access-hcd7j" (OuterVolumeSpecName: "kube-api-access-hcd7j") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "kube-api-access-hcd7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.000101 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-scripts" (OuterVolumeSpecName: "scripts") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.011191 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f19a482f-ce28-4c03-8976-6fcf560499aa","Type":"ContainerStarted","Data":"28fdeb9ea94d80b6fb301f94b399ffea3577fa9edca515ab69fbe5fa9d1de980"} Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.011244 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f19a482f-ce28-4c03-8976-6fcf560499aa","Type":"ContainerStarted","Data":"ae61b7765f68d341ef0a06b48d9a8c1f3fe0f5100871ac010eb5f9ed59fea30a"} Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.028609 4624 generic.go:334] "Generic (PLEG): container finished" podID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerID="8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131" exitCode=0 Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.028648 4624 generic.go:334] "Generic (PLEG): container finished" podID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerID="23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926" exitCode=0 Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.029813 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.030443 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerDied","Data":"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131"} Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.030475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerDied","Data":"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926"} Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.030489 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ef2ffb7-20c6-43d9-95be-5812a67f5c88","Type":"ContainerDied","Data":"6f25392dc71b29e583972bc87e99fa3484755f663552e2b04353dc74b5f94ab7"} Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.030508 4624 scope.go:117] "RemoveContainer" containerID="c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.082273 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.086375 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.086406 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.086426 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcd7j\" (UniqueName: \"kubernetes.io/projected/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-kube-api-access-hcd7j\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.086438 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.086446 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.136426 4624 scope.go:117] "RemoveContainer" containerID="52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.143847 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.170550 4624 scope.go:117] "RemoveContainer" containerID="8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.190039 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.196256 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-config-data" (OuterVolumeSpecName: "config-data") pod "3ef2ffb7-20c6-43d9-95be-5812a67f5c88" (UID: "3ef2ffb7-20c6-43d9-95be-5812a67f5c88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.206283 4624 scope.go:117] "RemoveContainer" containerID="23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.213323 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.243280 4624 scope.go:117] "RemoveContainer" containerID="c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.243824 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0\": container with ID starting with c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0 not found: ID does not exist" containerID="c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.243862 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0"} err="failed to get container status \"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0\": rpc error: code = NotFound desc = could not find container \"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0\": container with ID starting with c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0 not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.243889 4624 scope.go:117] "RemoveContainer" containerID="52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.244365 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d\": container with ID starting with 52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d not found: ID does not exist" containerID="52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.244390 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d"} err="failed to get container status \"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d\": rpc error: code = NotFound desc = could not find container \"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d\": container with ID starting with 52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.244404 4624 scope.go:117] "RemoveContainer" containerID="8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 
03:57:49.244644 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131\": container with ID starting with 8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131 not found: ID does not exist" containerID="8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.244668 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131"} err="failed to get container status \"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131\": rpc error: code = NotFound desc = could not find container \"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131\": container with ID starting with 8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131 not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.244681 4624 scope.go:117] "RemoveContainer" containerID="23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.244877 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926\": container with ID starting with 23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926 not found: ID does not exist" containerID="23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.244898 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926"} err="failed to get container status \"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926\": rpc 
error: code = NotFound desc = could not find container \"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926\": container with ID starting with 23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926 not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.244912 4624 scope.go:117] "RemoveContainer" containerID="c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.245094 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0"} err="failed to get container status \"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0\": rpc error: code = NotFound desc = could not find container \"c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0\": container with ID starting with c89867c0cc3136f28bc048a8e14f451168eb12e0a363ca7f1db2489ce81d48a0 not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.245114 4624 scope.go:117] "RemoveContainer" containerID="52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.245304 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d"} err="failed to get container status \"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d\": rpc error: code = NotFound desc = could not find container \"52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d\": container with ID starting with 52fed02e93e16d77b60057a6e5bbf63439c26938a0cd5a3fa8dc29b64962fa8d not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.245323 4624 scope.go:117] "RemoveContainer" containerID="8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131" Feb 28 03:57:49 crc 
kubenswrapper[4624]: I0228 03:57:49.248230 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131"} err="failed to get container status \"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131\": rpc error: code = NotFound desc = could not find container \"8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131\": container with ID starting with 8ffab12336374e0f9c1ca1af882f2a2ca309611529d8e1b7f528d2480615f131 not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.248297 4624 scope.go:117] "RemoveContainer" containerID="23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.248754 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926"} err="failed to get container status \"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926\": rpc error: code = NotFound desc = could not find container \"23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926\": container with ID starting with 23fbc8435cb6a33f372d20b96f48ac1a55fd611c4986b759f2afc4af0550a926 not found: ID does not exist" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.293190 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef2ffb7-20c6-43d9-95be-5812a67f5c88-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.313514 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-vtffs"] Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.313810 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" 
containerName="dnsmasq-dns" containerID="cri-o://450edef818c18497528cefc4d66f255e2cbf730bbb6fcf4d8433e6706ab4fb4f" gracePeriod=10 Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.481306 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.514930 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.529734 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.530356 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="sg-core" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530373 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="sg-core" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.530388 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="proxy-httpd" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530394 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="proxy-httpd" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.530416 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-notification-agent" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530424 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-notification-agent" Feb 28 03:57:49 crc kubenswrapper[4624]: E0228 03:57:49.530448 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-central-agent" Feb 28 03:57:49 crc 
kubenswrapper[4624]: I0228 03:57:49.530458 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-central-agent" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530687 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-notification-agent" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530700 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="sg-core" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530713 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="ceilometer-central-agent" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.530723 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" containerName="proxy-httpd" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.532769 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.539712 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.540253 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.555596 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712017 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-config-data\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712102 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-log-httpd\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712155 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxxbx\" (UniqueName: \"kubernetes.io/projected/85e63197-e526-4c05-9dfa-b65b4aac331f-kube-api-access-dxxbx\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712278 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-scripts\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712345 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-run-httpd\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.712384 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823268 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-log-httpd\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823339 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxxbx\" (UniqueName: \"kubernetes.io/projected/85e63197-e526-4c05-9dfa-b65b4aac331f-kube-api-access-dxxbx\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823390 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823430 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-scripts\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823485 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-run-httpd\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823521 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823566 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-config-data\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.823969 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-log-httpd\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " 
pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.825436 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-run-httpd\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.833043 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-scripts\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.841432 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.841947 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.842944 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-config-data\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:49 crc kubenswrapper[4624]: I0228 03:57:49.902123 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxxbx\" (UniqueName: 
\"kubernetes.io/projected/85e63197-e526-4c05-9dfa-b65b4aac331f-kube-api-access-dxxbx\") pod \"ceilometer-0\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " pod="openstack/ceilometer-0" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.168840 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.170013 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef2ffb7-20c6-43d9-95be-5812a67f5c88" path="/var/lib/kubelet/pods/3ef2ffb7-20c6-43d9-95be-5812a67f5c88/volumes" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.170897 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.171041 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f19a482f-ce28-4c03-8976-6fcf560499aa","Type":"ContainerStarted","Data":"4bc29c59a71404544a0cb3a7fc7a8eff69997d8970dd25cd885e43b65c9c9da0"} Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.237850 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-config\") pod \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.237985 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-sb\") pod \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.238044 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-nb\") pod \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.238078 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9kv\" (UniqueName: \"kubernetes.io/projected/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-kube-api-access-4n9kv\") pod \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.238203 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-swift-storage-0\") pod \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.238362 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-svc\") pod \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\" (UID: \"3ac07dd6-4fe7-4a79-a083-cf6b01a50026\") " Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.243651 4624 generic.go:334] "Generic (PLEG): container finished" podID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerID="450edef818c18497528cefc4d66f255e2cbf730bbb6fcf4d8433e6706ab4fb4f" exitCode=0 Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.245237 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" event={"ID":"3ac07dd6-4fe7-4a79-a083-cf6b01a50026","Type":"ContainerDied","Data":"450edef818c18497528cefc4d66f255e2cbf730bbb6fcf4d8433e6706ab4fb4f"} Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.245287 4624 scope.go:117] "RemoveContainer" containerID="450edef818c18497528cefc4d66f255e2cbf730bbb6fcf4d8433e6706ab4fb4f" Feb 
28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.277166 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.27713303 podStartE2EDuration="3.27713303s" podCreationTimestamp="2026-02-28 03:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:50.22980992 +0000 UTC m=+1324.893849229" watchObservedRunningTime="2026-02-28 03:57:50.27713303 +0000 UTC m=+1324.941172339" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.300184 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-kube-api-access-4n9kv" (OuterVolumeSpecName: "kube-api-access-4n9kv") pod "3ac07dd6-4fe7-4a79-a083-cf6b01a50026" (UID: "3ac07dd6-4fe7-4a79-a083-cf6b01a50026"). InnerVolumeSpecName "kube-api-access-4n9kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.345748 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9kv\" (UniqueName: \"kubernetes.io/projected/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-kube-api-access-4n9kv\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.400402 4624 scope.go:117] "RemoveContainer" containerID="c9fb9292421e6f8aef3835033372f0291bebcc607d6d95fdc584fbe219fc65bd" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.424961 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ac07dd6-4fe7-4a79-a083-cf6b01a50026" (UID: "3ac07dd6-4fe7-4a79-a083-cf6b01a50026"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.447904 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.485962 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ac07dd6-4fe7-4a79-a083-cf6b01a50026" (UID: "3ac07dd6-4fe7-4a79-a083-cf6b01a50026"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.504636 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ac07dd6-4fe7-4a79-a083-cf6b01a50026" (UID: "3ac07dd6-4fe7-4a79-a083-cf6b01a50026"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.552038 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.552094 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.555494 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-config" (OuterVolumeSpecName: "config") pod "3ac07dd6-4fe7-4a79-a083-cf6b01a50026" (UID: "3ac07dd6-4fe7-4a79-a083-cf6b01a50026"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.609384 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ac07dd6-4fe7-4a79-a083-cf6b01a50026" (UID: "3ac07dd6-4fe7-4a79-a083-cf6b01a50026"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.653782 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:50 crc kubenswrapper[4624]: I0228 03:57:50.653817 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac07dd6-4fe7-4a79-a083-cf6b01a50026-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:51 crc kubenswrapper[4624]: I0228 03:57:51.209016 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:51 crc kubenswrapper[4624]: W0228 03:57:51.221205 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85e63197_e526_4c05_9dfa_b65b4aac331f.slice/crio-80a733fe92b51a5ad4f531db8e90a9be4785431885f23216ce16f9833534d861 WatchSource:0}: Error finding container 80a733fe92b51a5ad4f531db8e90a9be4785431885f23216ce16f9833534d861: Status 404 returned error can't find the container with id 80a733fe92b51a5ad4f531db8e90a9be4785431885f23216ce16f9833534d861 Feb 28 03:57:51 crc kubenswrapper[4624]: I0228 03:57:51.257863 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerStarted","Data":"80a733fe92b51a5ad4f531db8e90a9be4785431885f23216ce16f9833534d861"} Feb 28 03:57:51 crc kubenswrapper[4624]: I0228 03:57:51.259137 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" event={"ID":"3ac07dd6-4fe7-4a79-a083-cf6b01a50026","Type":"ContainerDied","Data":"c62c328f64a6799746c7dfe50e59fc2826a354786b2c777dedeeb9aaf5e395a7"} Feb 28 03:57:51 crc kubenswrapper[4624]: I0228 03:57:51.259168 4624 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-vtffs" Feb 28 03:57:51 crc kubenswrapper[4624]: I0228 03:57:51.303166 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-vtffs"] Feb 28 03:57:51 crc kubenswrapper[4624]: I0228 03:57:51.318995 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-vtffs"] Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.055048 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.118619 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-combined-ca-bundle\") pod \"7743fde0-948b-45e0-9c8c-b90ec50005e7\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.118732 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45bmm\" (UniqueName: \"kubernetes.io/projected/7743fde0-948b-45e0-9c8c-b90ec50005e7-kube-api-access-45bmm\") pod \"7743fde0-948b-45e0-9c8c-b90ec50005e7\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.118770 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-nova-metadata-tls-certs\") pod \"7743fde0-948b-45e0-9c8c-b90ec50005e7\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.118836 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7743fde0-948b-45e0-9c8c-b90ec50005e7-logs\") pod \"7743fde0-948b-45e0-9c8c-b90ec50005e7\" (UID: 
\"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.118957 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-config-data\") pod \"7743fde0-948b-45e0-9c8c-b90ec50005e7\" (UID: \"7743fde0-948b-45e0-9c8c-b90ec50005e7\") " Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.128834 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7743fde0-948b-45e0-9c8c-b90ec50005e7-logs" (OuterVolumeSpecName: "logs") pod "7743fde0-948b-45e0-9c8c-b90ec50005e7" (UID: "7743fde0-948b-45e0-9c8c-b90ec50005e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.135549 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7743fde0-948b-45e0-9c8c-b90ec50005e7-kube-api-access-45bmm" (OuterVolumeSpecName: "kube-api-access-45bmm") pod "7743fde0-948b-45e0-9c8c-b90ec50005e7" (UID: "7743fde0-948b-45e0-9c8c-b90ec50005e7"). InnerVolumeSpecName "kube-api-access-45bmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.156021 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" path="/var/lib/kubelet/pods/3ac07dd6-4fe7-4a79-a083-cf6b01a50026/volumes" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.181225 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-config-data" (OuterVolumeSpecName: "config-data") pod "7743fde0-948b-45e0-9c8c-b90ec50005e7" (UID: "7743fde0-948b-45e0-9c8c-b90ec50005e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.196090 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7743fde0-948b-45e0-9c8c-b90ec50005e7" (UID: "7743fde0-948b-45e0-9c8c-b90ec50005e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.220723 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.220855 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45bmm\" (UniqueName: \"kubernetes.io/projected/7743fde0-948b-45e0-9c8c-b90ec50005e7-kube-api-access-45bmm\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.220912 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7743fde0-948b-45e0-9c8c-b90ec50005e7-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.220963 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.280341 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7743fde0-948b-45e0-9c8c-b90ec50005e7" (UID: "7743fde0-948b-45e0-9c8c-b90ec50005e7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.280356 4624 generic.go:334] "Generic (PLEG): container finished" podID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerID="c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015" exitCode=0 Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.280388 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7743fde0-948b-45e0-9c8c-b90ec50005e7","Type":"ContainerDied","Data":"c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015"} Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.280424 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7743fde0-948b-45e0-9c8c-b90ec50005e7","Type":"ContainerDied","Data":"d29ec654b58dcf2742510676b0f5a6cbad1089c8ca43f1dba050e37585fff00f"} Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.280442 4624 scope.go:117] "RemoveContainer" containerID="c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.280478 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.323414 4624 scope.go:117] "RemoveContainer" containerID="211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.325832 4624 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7743fde0-948b-45e0-9c8c-b90ec50005e7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.348134 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.376234 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.412272 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:52 crc kubenswrapper[4624]: E0228 03:57:52.413419 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerName="init" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.413445 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerName="init" Feb 28 03:57:52 crc kubenswrapper[4624]: E0228 03:57:52.413460 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-metadata" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.413468 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-metadata" Feb 28 03:57:52 crc kubenswrapper[4624]: E0228 03:57:52.413493 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-log" Feb 28 03:57:52 crc 
kubenswrapper[4624]: I0228 03:57:52.413502 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-log" Feb 28 03:57:52 crc kubenswrapper[4624]: E0228 03:57:52.413522 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerName="dnsmasq-dns" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.413531 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerName="dnsmasq-dns" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.413764 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac07dd6-4fe7-4a79-a083-cf6b01a50026" containerName="dnsmasq-dns" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.413779 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-log" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.413799 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" containerName="nova-metadata-metadata" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.416411 4624 scope.go:117] "RemoveContainer" containerID="c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015" Feb 28 03:57:52 crc kubenswrapper[4624]: E0228 03:57:52.417024 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015\": container with ID starting with c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015 not found: ID does not exist" containerID="c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.417071 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015"} err="failed to get container status \"c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015\": rpc error: code = NotFound desc = could not find container \"c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015\": container with ID starting with c0b6f8337e595e5d366cf49a4ebc5fce37682b9bd61d90b086d57854cc82a015 not found: ID does not exist" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.417170 4624 scope.go:117] "RemoveContainer" containerID="211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8" Feb 28 03:57:52 crc kubenswrapper[4624]: E0228 03:57:52.420373 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8\": container with ID starting with 211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8 not found: ID does not exist" containerID="211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.420415 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8"} err="failed to get container status \"211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8\": rpc error: code = NotFound desc = could not find container \"211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8\": container with ID starting with 211e734492a69a37841f20198d14b1012b2dd4d2bd248ac46aff4fd28addfbf8 not found: ID does not exist" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.430824 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.440337 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.440601 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.486968 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.541649 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdfvk\" (UniqueName: \"kubernetes.io/projected/edbc8a68-6254-4b10-8baa-ef87dbc48031-kube-api-access-wdfvk\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.541716 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-config-data\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.541810 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.542160 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.542570 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc8a68-6254-4b10-8baa-ef87dbc48031-logs\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.644906 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdfvk\" (UniqueName: \"kubernetes.io/projected/edbc8a68-6254-4b10-8baa-ef87dbc48031-kube-api-access-wdfvk\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.644983 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-config-data\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.645001 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.645040 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " 
pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.645087 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc8a68-6254-4b10-8baa-ef87dbc48031-logs\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.645741 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc8a68-6254-4b10-8baa-ef87dbc48031-logs\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.656996 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.662004 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-config-data\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.663288 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.680957 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdfvk\" (UniqueName: 
\"kubernetes.io/projected/edbc8a68-6254-4b10-8baa-ef87dbc48031-kube-api-access-wdfvk\") pod \"nova-metadata-0\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " pod="openstack/nova-metadata-0" Feb 28 03:57:52 crc kubenswrapper[4624]: I0228 03:57:52.840985 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.294312 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerStarted","Data":"2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df"} Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.395083 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.434462 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.434740 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="eaa63baf-d297-4867-b87d-2c49da381d42" containerName="kube-state-metrics" containerID="cri-o://7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3" gracePeriod=30 Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.447551 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.503835 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 03:57:53 crc kubenswrapper[4624]: I0228 03:57:53.622817 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.120404 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.124201 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7743fde0-948b-45e0-9c8c-b90ec50005e7" path="/var/lib/kubelet/pods/7743fde0-948b-45e0-9c8c-b90ec50005e7/volumes" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.246931 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfg97\" (UniqueName: \"kubernetes.io/projected/eaa63baf-d297-4867-b87d-2c49da381d42-kube-api-access-jfg97\") pod \"eaa63baf-d297-4867-b87d-2c49da381d42\" (UID: \"eaa63baf-d297-4867-b87d-2c49da381d42\") " Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.271513 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa63baf-d297-4867-b87d-2c49da381d42-kube-api-access-jfg97" (OuterVolumeSpecName: "kube-api-access-jfg97") pod "eaa63baf-d297-4867-b87d-2c49da381d42" (UID: "eaa63baf-d297-4867-b87d-2c49da381d42"). InnerVolumeSpecName "kube-api-access-jfg97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.338758 4624 generic.go:334] "Generic (PLEG): container finished" podID="eaa63baf-d297-4867-b87d-2c49da381d42" containerID="7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3" exitCode=2 Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.338845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eaa63baf-d297-4867-b87d-2c49da381d42","Type":"ContainerDied","Data":"7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3"} Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.338883 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eaa63baf-d297-4867-b87d-2c49da381d42","Type":"ContainerDied","Data":"4bf706203edc7806fcd4bb9fc74e0f89c5711644b0364a8de0e917997a5b7fd3"} Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.338904 4624 scope.go:117] "RemoveContainer" containerID="7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.339066 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.350784 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfg97\" (UniqueName: \"kubernetes.io/projected/eaa63baf-d297-4867-b87d-2c49da381d42-kube-api-access-jfg97\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.370325 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerStarted","Data":"290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d"} Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.424272 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edbc8a68-6254-4b10-8baa-ef87dbc48031","Type":"ContainerStarted","Data":"5ce7dbbc50b71e1372f5d8f8ba70f2dfea7cb2467b246cca7711e0ae02ba1416"} Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.425991 4624 scope.go:117] "RemoveContainer" containerID="7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3" Feb 28 03:57:54 crc kubenswrapper[4624]: E0228 03:57:54.429211 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3\": container with ID starting with 7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3 not found: ID does not exist" containerID="7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.429249 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3"} err="failed to get container status \"7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3\": rpc error: code = NotFound desc = could not find container 
\"7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3\": container with ID starting with 7a14b05d5773e37880176a6e334af2d0333a86378c74174ab0ce59929857afa3 not found: ID does not exist" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.494323 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.572139 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.598074 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:57:54 crc kubenswrapper[4624]: E0228 03:57:54.598772 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa63baf-d297-4867-b87d-2c49da381d42" containerName="kube-state-metrics" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.598793 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa63baf-d297-4867-b87d-2c49da381d42" containerName="kube-state-metrics" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.599037 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa63baf-d297-4867-b87d-2c49da381d42" containerName="kube-state-metrics" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.600139 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.608718 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.610074 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.614261 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.664273 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.664643 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.664671 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dfr\" (UniqueName: \"kubernetes.io/projected/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-api-access-t9dfr\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.664806 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.766758 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.766911 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.766957 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.766990 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9dfr\" (UniqueName: \"kubernetes.io/projected/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-api-access-t9dfr\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.858437 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.863658 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.863882 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d209e2-524d-40ba-b092-14b4f73dfb71-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.887826 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9dfr\" (UniqueName: \"kubernetes.io/projected/60d209e2-524d-40ba-b092-14b4f73dfb71-kube-api-access-t9dfr\") pod \"kube-state-metrics-0\" (UID: \"60d209e2-524d-40ba-b092-14b4f73dfb71\") " pod="openstack/kube-state-metrics-0" Feb 28 03:57:54 crc kubenswrapper[4624]: I0228 03:57:54.937806 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.231928 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.299481 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.478348 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerStarted","Data":"1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51"} Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.500921 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edbc8a68-6254-4b10-8baa-ef87dbc48031","Type":"ContainerStarted","Data":"33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5"} Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.501652 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edbc8a68-6254-4b10-8baa-ef87dbc48031","Type":"ContainerStarted","Data":"377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3"} Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.552145 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.552114848 podStartE2EDuration="3.552114848s" podCreationTimestamp="2026-02-28 03:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:57:55.540979771 +0000 UTC m=+1330.205019070" watchObservedRunningTime="2026-02-28 03:57:55.552114848 +0000 UTC m=+1330.216154147" Feb 28 03:57:55 crc kubenswrapper[4624]: I0228 03:57:55.922012 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 03:57:56 crc kubenswrapper[4624]: I0228 03:57:56.105231 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="eaa63baf-d297-4867-b87d-2c49da381d42" path="/var/lib/kubelet/pods/eaa63baf-d297-4867-b87d-2c49da381d42/volumes" Feb 28 03:57:56 crc kubenswrapper[4624]: I0228 03:57:56.565522 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60d209e2-524d-40ba-b092-14b4f73dfb71","Type":"ContainerStarted","Data":"413438c7c655a2b897a86cfe4f875da216b4eb77ea15a8c68ed0c4d4b7478c56"} Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.581629 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60d209e2-524d-40ba-b092-14b4f73dfb71","Type":"ContainerStarted","Data":"fb58230993ad1de9db5fc3d38ddb728011f84aa7e1305cfcb79cd972e7240001"} Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.583860 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.591756 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.591797 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.598313 4624 generic.go:334] "Generic (PLEG): container finished" podID="3a2f3514-5076-427b-b99d-92ac7a0a0fb3" containerID="b5d12408f2083d87a38e8f1779625ed668347139bf6bb90ed1b6bc032fd4dbff" exitCode=137 Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.598358 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a2f3514-5076-427b-b99d-92ac7a0a0fb3","Type":"ContainerDied","Data":"b5d12408f2083d87a38e8f1779625ed668347139bf6bb90ed1b6bc032fd4dbff"} Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.616922 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=3.2646617940000002 podStartE2EDuration="3.616899784s" podCreationTimestamp="2026-02-28 03:57:54 +0000 UTC" firstStartedPulling="2026-02-28 03:57:55.916487371 +0000 UTC m=+1330.580526680" lastFinishedPulling="2026-02-28 03:57:56.268725361 +0000 UTC m=+1330.932764670" observedRunningTime="2026-02-28 03:57:57.611277274 +0000 UTC m=+1332.275316583" watchObservedRunningTime="2026-02-28 03:57:57.616899784 +0000 UTC m=+1332.280939093" Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.843546 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:57:57 crc kubenswrapper[4624]: I0228 03:57:57.843599 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.021343 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.081994 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcjzs\" (UniqueName: \"kubernetes.io/projected/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-kube-api-access-dcjzs\") pod \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.082313 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-config-data\") pod \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\" (UID: \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.082723 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-combined-ca-bundle\") pod \"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\" (UID: 
\"3a2f3514-5076-427b-b99d-92ac7a0a0fb3\") " Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.103694 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-kube-api-access-dcjzs" (OuterVolumeSpecName: "kube-api-access-dcjzs") pod "3a2f3514-5076-427b-b99d-92ac7a0a0fb3" (UID: "3a2f3514-5076-427b-b99d-92ac7a0a0fb3"). InnerVolumeSpecName "kube-api-access-dcjzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.174603 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a2f3514-5076-427b-b99d-92ac7a0a0fb3" (UID: "3a2f3514-5076-427b-b99d-92ac7a0a0fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.174803 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-config-data" (OuterVolumeSpecName: "config-data") pod "3a2f3514-5076-427b-b99d-92ac7a0a0fb3" (UID: "3a2f3514-5076-427b-b99d-92ac7a0a0fb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.206994 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.207339 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcjzs\" (UniqueName: \"kubernetes.io/projected/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-kube-api-access-dcjzs\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.207370 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2f3514-5076-427b-b99d-92ac7a0a0fb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.603287 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.603325 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.611685 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerStarted","Data":"3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d"} Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.612422 4624 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="proxy-httpd" containerID="cri-o://3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d" gracePeriod=30 Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.612472 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="sg-core" containerID="cri-o://1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51" gracePeriod=30 Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.612584 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-central-agent" containerID="cri-o://2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df" gracePeriod=30 Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.612651 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-notification-agent" containerID="cri-o://290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d" gracePeriod=30 Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.613764 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.615352 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.616726 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a2f3514-5076-427b-b99d-92ac7a0a0fb3","Type":"ContainerDied","Data":"0477e860f2fde82ba414607bd8586883a3e202d49ac22f2a1024eece80f222d9"} Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.616776 4624 scope.go:117] "RemoveContainer" containerID="b5d12408f2083d87a38e8f1779625ed668347139bf6bb90ed1b6bc032fd4dbff" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.661522 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.094464247 podStartE2EDuration="9.661501952s" podCreationTimestamp="2026-02-28 03:57:49 +0000 UTC" firstStartedPulling="2026-02-28 03:57:51.226580905 +0000 UTC m=+1325.890620204" lastFinishedPulling="2026-02-28 03:57:57.7936186 +0000 UTC m=+1332.457657909" observedRunningTime="2026-02-28 03:57:58.65277584 +0000 UTC m=+1333.316815149" watchObservedRunningTime="2026-02-28 03:57:58.661501952 +0000 UTC m=+1333.325541261" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.697991 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.708115 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.759797 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:58 crc kubenswrapper[4624]: E0228 03:57:58.760596 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2f3514-5076-427b-b99d-92ac7a0a0fb3" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.760622 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a2f3514-5076-427b-b99d-92ac7a0a0fb3" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.768380 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2f3514-5076-427b-b99d-92ac7a0a0fb3" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.769126 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.769241 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.780813 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.783739 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.790708 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.931629 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.931724 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fckx\" (UniqueName: \"kubernetes.io/projected/4b31404c-f19e-465d-9acb-3a314299ad57-kube-api-access-6fckx\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.931815 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.931876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:58 crc kubenswrapper[4624]: I0228 03:57:58.932220 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.035059 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.035179 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fckx\" (UniqueName: \"kubernetes.io/projected/4b31404c-f19e-465d-9acb-3a314299ad57-kube-api-access-6fckx\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.035253 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.035287 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.035350 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.047697 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.054704 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 
03:57:59.055128 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.056013 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fckx\" (UniqueName: \"kubernetes.io/projected/4b31404c-f19e-465d-9acb-3a314299ad57-kube-api-access-6fckx\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.058770 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b31404c-f19e-465d-9acb-3a314299ad57-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b31404c-f19e-465d-9acb-3a314299ad57\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.109545 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.628819 4624 generic.go:334] "Generic (PLEG): container finished" podID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerID="1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51" exitCode=2 Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.629258 4624 generic.go:334] "Generic (PLEG): container finished" podID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerID="290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d" exitCode=0 Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.628887 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerDied","Data":"1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51"} Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.629329 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerDied","Data":"290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d"} Feb 28 03:57:59 crc kubenswrapper[4624]: I0228 03:57:59.705315 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.100980 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2f3514-5076-427b-b99d-92ac7a0a0fb3" path="/var/lib/kubelet/pods/3a2f3514-5076-427b-b99d-92ac7a0a0fb3/volumes" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.158011 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537518-ntcqp"] Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.159857 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.162574 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.163976 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.175228 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537518-ntcqp"] Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.180443 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.283105 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh5jr\" (UniqueName: \"kubernetes.io/projected/ef1a92b2-aaa8-4b8c-a947-47561d583f80-kube-api-access-mh5jr\") pod \"auto-csr-approver-29537518-ntcqp\" (UID: \"ef1a92b2-aaa8-4b8c-a947-47561d583f80\") " pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.385891 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh5jr\" (UniqueName: \"kubernetes.io/projected/ef1a92b2-aaa8-4b8c-a947-47561d583f80-kube-api-access-mh5jr\") pod \"auto-csr-approver-29537518-ntcqp\" (UID: \"ef1a92b2-aaa8-4b8c-a947-47561d583f80\") " pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.408710 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh5jr\" (UniqueName: \"kubernetes.io/projected/ef1a92b2-aaa8-4b8c-a947-47561d583f80-kube-api-access-mh5jr\") pod \"auto-csr-approver-29537518-ntcqp\" (UID: \"ef1a92b2-aaa8-4b8c-a947-47561d583f80\") " 
pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.479649 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.673140 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b31404c-f19e-465d-9acb-3a314299ad57","Type":"ContainerStarted","Data":"fd9fbf295df449af93c92be791c32b48f202a7d9aca39a90bb1c60b3de2b4013"} Feb 28 03:58:00 crc kubenswrapper[4624]: I0228 03:58:00.673631 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b31404c-f19e-465d-9acb-3a314299ad57","Type":"ContainerStarted","Data":"e1d5a05de3821013bb3f3699c59deb06fe7deef2d949066f689aafe6705e79d9"} Feb 28 03:58:01 crc kubenswrapper[4624]: I0228 03:58:01.080913 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.080890512 podStartE2EDuration="3.080890512s" podCreationTimestamp="2026-02-28 03:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:58:00.711666421 +0000 UTC m=+1335.375705730" watchObservedRunningTime="2026-02-28 03:58:01.080890512 +0000 UTC m=+1335.744929821" Feb 28 03:58:01 crc kubenswrapper[4624]: I0228 03:58:01.094721 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537518-ntcqp"] Feb 28 03:58:01 crc kubenswrapper[4624]: W0228 03:58:01.097344 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef1a92b2_aaa8_4b8c_a947_47561d583f80.slice/crio-42d4b896843b26b22b2e689a06a3fea10674d26c3c739b29540f015cb6021b93 WatchSource:0}: Error finding container 
42d4b896843b26b22b2e689a06a3fea10674d26c3c739b29540f015cb6021b93: Status 404 returned error can't find the container with id 42d4b896843b26b22b2e689a06a3fea10674d26c3c739b29540f015cb6021b93 Feb 28 03:58:01 crc kubenswrapper[4624]: I0228 03:58:01.683628 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" event={"ID":"ef1a92b2-aaa8-4b8c-a947-47561d583f80","Type":"ContainerStarted","Data":"42d4b896843b26b22b2e689a06a3fea10674d26c3c739b29540f015cb6021b93"} Feb 28 03:58:02 crc kubenswrapper[4624]: I0228 03:58:02.704190 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" event={"ID":"ef1a92b2-aaa8-4b8c-a947-47561d583f80","Type":"ContainerStarted","Data":"c5b66eff707641ba64fd2231b152e2ea08b558cce285a77ac32fc2f7c4724472"} Feb 28 03:58:02 crc kubenswrapper[4624]: I0228 03:58:02.732745 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" podStartSLOduration=1.833866275 podStartE2EDuration="2.732723192s" podCreationTimestamp="2026-02-28 03:58:00 +0000 UTC" firstStartedPulling="2026-02-28 03:58:01.100541606 +0000 UTC m=+1335.764580915" lastFinishedPulling="2026-02-28 03:58:01.999398523 +0000 UTC m=+1336.663437832" observedRunningTime="2026-02-28 03:58:02.725072789 +0000 UTC m=+1337.389112098" watchObservedRunningTime="2026-02-28 03:58:02.732723192 +0000 UTC m=+1337.396762501" Feb 28 03:58:02 crc kubenswrapper[4624]: I0228 03:58:02.842565 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 03:58:02 crc kubenswrapper[4624]: I0228 03:58:02.843419 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 03:58:03 crc kubenswrapper[4624]: I0228 03:58:03.714931 4624 generic.go:334] "Generic (PLEG): container finished" podID="ef1a92b2-aaa8-4b8c-a947-47561d583f80" 
containerID="c5b66eff707641ba64fd2231b152e2ea08b558cce285a77ac32fc2f7c4724472" exitCode=0 Feb 28 03:58:03 crc kubenswrapper[4624]: I0228 03:58:03.714985 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" event={"ID":"ef1a92b2-aaa8-4b8c-a947-47561d583f80","Type":"ContainerDied","Data":"c5b66eff707641ba64fd2231b152e2ea08b558cce285a77ac32fc2f7c4724472"} Feb 28 03:58:03 crc kubenswrapper[4624]: I0228 03:58:03.855270 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:58:03 crc kubenswrapper[4624]: I0228 03:58:03.855305 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:58:04 crc kubenswrapper[4624]: I0228 03:58:04.110275 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.172555 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.248193 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.324332 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh5jr\" (UniqueName: \"kubernetes.io/projected/ef1a92b2-aaa8-4b8c-a947-47561d583f80-kube-api-access-mh5jr\") pod \"ef1a92b2-aaa8-4b8c-a947-47561d583f80\" (UID: \"ef1a92b2-aaa8-4b8c-a947-47561d583f80\") " Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.335622 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1a92b2-aaa8-4b8c-a947-47561d583f80-kube-api-access-mh5jr" (OuterVolumeSpecName: "kube-api-access-mh5jr") pod "ef1a92b2-aaa8-4b8c-a947-47561d583f80" (UID: "ef1a92b2-aaa8-4b8c-a947-47561d583f80"). InnerVolumeSpecName "kube-api-access-mh5jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.427062 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh5jr\" (UniqueName: \"kubernetes.io/projected/ef1a92b2-aaa8-4b8c-a947-47561d583f80-kube-api-access-mh5jr\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.740883 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" event={"ID":"ef1a92b2-aaa8-4b8c-a947-47561d583f80","Type":"ContainerDied","Data":"42d4b896843b26b22b2e689a06a3fea10674d26c3c739b29540f015cb6021b93"} Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.740970 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42d4b896843b26b22b2e689a06a3fea10674d26c3c739b29540f015cb6021b93" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.740995 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537518-ntcqp" Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.744294 4624 generic.go:334] "Generic (PLEG): container finished" podID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerID="2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df" exitCode=0 Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.744353 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerDied","Data":"2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df"} Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.814708 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537512-gk4pk"] Feb 28 03:58:05 crc kubenswrapper[4624]: I0228 03:58:05.825166 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537512-gk4pk"] Feb 28 03:58:06 crc 
kubenswrapper[4624]: I0228 03:58:06.102329 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f0bdad-f857-4411-a897-baf63edc11b3" path="/var/lib/kubelet/pods/f5f0bdad-f857-4411-a897-baf63edc11b3/volumes" Feb 28 03:58:07 crc kubenswrapper[4624]: I0228 03:58:07.594993 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 03:58:07 crc kubenswrapper[4624]: I0228 03:58:07.595893 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 03:58:07 crc kubenswrapper[4624]: I0228 03:58:07.596034 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 03:58:07 crc kubenswrapper[4624]: I0228 03:58:07.603915 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 03:58:07 crc kubenswrapper[4624]: I0228 03:58:07.766024 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 03:58:07 crc kubenswrapper[4624]: I0228 03:58:07.773651 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 03:58:09 crc kubenswrapper[4624]: I0228 03:58:09.110016 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:58:09 crc kubenswrapper[4624]: I0228 03:58:09.134891 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:58:09 crc kubenswrapper[4624]: I0228 03:58:09.812491 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.001831 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vwc78"] Feb 28 03:58:10 crc kubenswrapper[4624]: E0228 03:58:10.003175 4624 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ef1a92b2-aaa8-4b8c-a947-47561d583f80" containerName="oc" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.007146 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1a92b2-aaa8-4b8c-a947-47561d583f80" containerName="oc" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.007889 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1a92b2-aaa8-4b8c-a947-47561d583f80" containerName="oc" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.009115 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.014582 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.018967 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.032006 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vwc78"] Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.145855 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.145922 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxk7\" (UniqueName: \"kubernetes.io/projected/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-kube-api-access-lmxk7\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " 
pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.146038 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-config-data\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.146109 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-scripts\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.248629 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxk7\" (UniqueName: \"kubernetes.io/projected/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-kube-api-access-lmxk7\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.248844 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-config-data\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.248970 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-scripts\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " 
pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.249126 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.256960 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-scripts\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.257072 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.265520 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-config-data\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.268654 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxk7\" (UniqueName: \"kubernetes.io/projected/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-kube-api-access-lmxk7\") pod \"nova-cell1-cell-mapping-vwc78\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc 
kubenswrapper[4624]: I0228 03:58:10.339208 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:10 crc kubenswrapper[4624]: I0228 03:58:10.855688 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vwc78"] Feb 28 03:58:11 crc kubenswrapper[4624]: I0228 03:58:11.804695 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vwc78" event={"ID":"cd1bfb27-f028-4b0c-a657-a8435c7fcf72","Type":"ContainerStarted","Data":"dcf59992b2c0e0b20cc344713b3563ab31f403ea59534ae3c75f1a6041d139ef"} Feb 28 03:58:11 crc kubenswrapper[4624]: I0228 03:58:11.804761 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vwc78" event={"ID":"cd1bfb27-f028-4b0c-a657-a8435c7fcf72","Type":"ContainerStarted","Data":"c974ddf2d824972d321865ed53b90d2ae75e35a9a5bbe7328d5277dc31a72fbf"} Feb 28 03:58:11 crc kubenswrapper[4624]: I0228 03:58:11.828543 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vwc78" podStartSLOduration=2.82852484 podStartE2EDuration="2.82852484s" podCreationTimestamp="2026-02-28 03:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:58:11.819904541 +0000 UTC m=+1346.483943850" watchObservedRunningTime="2026-02-28 03:58:11.82852484 +0000 UTC m=+1346.492564149" Feb 28 03:58:12 crc kubenswrapper[4624]: I0228 03:58:12.850698 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 03:58:12 crc kubenswrapper[4624]: I0228 03:58:12.856793 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 03:58:12 crc kubenswrapper[4624]: I0228 03:58:12.860212 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Feb 28 03:58:13 crc kubenswrapper[4624]: I0228 03:58:13.856661 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.852242 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerID="93a0882fc2c46a1d7ae2f33f5d46e75723aa417d1b35f6b58392edea81f40dce" exitCode=137 Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.852258 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerDied","Data":"93a0882fc2c46a1d7ae2f33f5d46e75723aa417d1b35f6b58392edea81f40dce"} Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.853182 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerStarted","Data":"165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3"} Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.853260 4624 scope.go:117] "RemoveContainer" containerID="7f2e0d50f88199a063c27e79239b0538235bbf6111c30dfd8d0c32d33e145b7c" Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.858257 4624 generic.go:334] "Generic (PLEG): container finished" podID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerID="044c0669362491674e475d7359b88b571d3a8160c975f83c5501cc2c9a104e26" exitCode=137 Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.858373 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerDied","Data":"044c0669362491674e475d7359b88b571d3a8160c975f83c5501cc2c9a104e26"} Feb 28 03:58:14 crc kubenswrapper[4624]: I0228 03:58:14.859757 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988c5cd-svksm" 
event={"ID":"6ccc2a9a-c3cc-4ddb-a700-86713957337e","Type":"ContainerStarted","Data":"701ecd466027f121da3edd3e836c0c3636dcfe9be9fe8b1e760f7c7bda78124b"} Feb 28 03:58:15 crc kubenswrapper[4624]: I0228 03:58:15.053031 4624 scope.go:117] "RemoveContainer" containerID="940754d093426155a0bfd9f597844251410b1b7303859424203ebbb0061de2e3" Feb 28 03:58:16 crc kubenswrapper[4624]: I0228 03:58:16.894024 4624 generic.go:334] "Generic (PLEG): container finished" podID="cd1bfb27-f028-4b0c-a657-a8435c7fcf72" containerID="dcf59992b2c0e0b20cc344713b3563ab31f403ea59534ae3c75f1a6041d139ef" exitCode=0 Feb 28 03:58:16 crc kubenswrapper[4624]: I0228 03:58:16.894176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vwc78" event={"ID":"cd1bfb27-f028-4b0c-a657-a8435c7fcf72","Type":"ContainerDied","Data":"dcf59992b2c0e0b20cc344713b3563ab31f403ea59534ae3c75f1a6041d139ef"} Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.269348 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.463928 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-combined-ca-bundle\") pod \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.464245 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-scripts\") pod \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.464301 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmxk7\" (UniqueName: \"kubernetes.io/projected/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-kube-api-access-lmxk7\") pod \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.464470 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-config-data\") pod \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\" (UID: \"cd1bfb27-f028-4b0c-a657-a8435c7fcf72\") " Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.472111 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-scripts" (OuterVolumeSpecName: "scripts") pod "cd1bfb27-f028-4b0c-a657-a8435c7fcf72" (UID: "cd1bfb27-f028-4b0c-a657-a8435c7fcf72"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.472131 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-kube-api-access-lmxk7" (OuterVolumeSpecName: "kube-api-access-lmxk7") pod "cd1bfb27-f028-4b0c-a657-a8435c7fcf72" (UID: "cd1bfb27-f028-4b0c-a657-a8435c7fcf72"). InnerVolumeSpecName "kube-api-access-lmxk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.499852 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd1bfb27-f028-4b0c-a657-a8435c7fcf72" (UID: "cd1bfb27-f028-4b0c-a657-a8435c7fcf72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.501179 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-config-data" (OuterVolumeSpecName: "config-data") pod "cd1bfb27-f028-4b0c-a657-a8435c7fcf72" (UID: "cd1bfb27-f028-4b0c-a657-a8435c7fcf72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.567516 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.567556 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.567574 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.567586 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmxk7\" (UniqueName: \"kubernetes.io/projected/cd1bfb27-f028-4b0c-a657-a8435c7fcf72-kube-api-access-lmxk7\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.917859 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vwc78" event={"ID":"cd1bfb27-f028-4b0c-a657-a8435c7fcf72","Type":"ContainerDied","Data":"c974ddf2d824972d321865ed53b90d2ae75e35a9a5bbe7328d5277dc31a72fbf"} Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.917909 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c974ddf2d824972d321865ed53b90d2ae75e35a9a5bbe7328d5277dc31a72fbf" Feb 28 03:58:18 crc kubenswrapper[4624]: I0228 03:58:18.917980 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vwc78" Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.124502 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.124920 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-log" containerID="cri-o://28fdeb9ea94d80b6fb301f94b399ffea3577fa9edca515ab69fbe5fa9d1de980" gracePeriod=30 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.125189 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-api" containerID="cri-o://4bc29c59a71404544a0cb3a7fc7a8eff69997d8970dd25cd885e43b65c9c9da0" gracePeriod=30 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.163289 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.163642 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e5689b27-efcc-4238-ab31-f85edaa239d6" containerName="nova-scheduler-scheduler" containerID="cri-o://5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" gracePeriod=30 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.204168 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.204499 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-log" containerID="cri-o://377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3" gracePeriod=30 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.204693 4624 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-metadata" containerID="cri-o://33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5" gracePeriod=30 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.540586 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.540665 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.934326 4624 generic.go:334] "Generic (PLEG): container finished" podID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerID="377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3" exitCode=143 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.934421 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edbc8a68-6254-4b10-8baa-ef87dbc48031","Type":"ContainerDied","Data":"377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3"} Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.940122 4624 generic.go:334] "Generic (PLEG): container finished" podID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerID="28fdeb9ea94d80b6fb301f94b399ffea3577fa9edca515ab69fbe5fa9d1de980" exitCode=143 Feb 28 03:58:19 crc kubenswrapper[4624]: I0228 03:58:19.940179 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f19a482f-ce28-4c03-8976-6fcf560499aa","Type":"ContainerDied","Data":"28fdeb9ea94d80b6fb301f94b399ffea3577fa9edca515ab69fbe5fa9d1de980"} Feb 28 03:58:20 crc kubenswrapper[4624]: I0228 03:58:20.177317 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.962485 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.979859 4624 generic.go:334] "Generic (PLEG): container finished" podID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerID="33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5" exitCode=0 Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.979927 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edbc8a68-6254-4b10-8baa-ef87dbc48031","Type":"ContainerDied","Data":"33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5"} Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.979958 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edbc8a68-6254-4b10-8baa-ef87dbc48031","Type":"ContainerDied","Data":"5ce7dbbc50b71e1372f5d8f8ba70f2dfea7cb2467b246cca7711e0ae02ba1416"} Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.979977 4624 scope.go:117] "RemoveContainer" containerID="33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5" Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.980188 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.995261 4624 generic.go:334] "Generic (PLEG): container finished" podID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerID="4bc29c59a71404544a0cb3a7fc7a8eff69997d8970dd25cd885e43b65c9c9da0" exitCode=0 Feb 28 03:58:22 crc kubenswrapper[4624]: I0228 03:58:22.995311 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f19a482f-ce28-4c03-8976-6fcf560499aa","Type":"ContainerDied","Data":"4bc29c59a71404544a0cb3a7fc7a8eff69997d8970dd25cd885e43b65c9c9da0"} Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.008253 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdfvk\" (UniqueName: \"kubernetes.io/projected/edbc8a68-6254-4b10-8baa-ef87dbc48031-kube-api-access-wdfvk\") pod \"edbc8a68-6254-4b10-8baa-ef87dbc48031\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.008782 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc8a68-6254-4b10-8baa-ef87dbc48031-logs\") pod \"edbc8a68-6254-4b10-8baa-ef87dbc48031\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.008839 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-combined-ca-bundle\") pod \"edbc8a68-6254-4b10-8baa-ef87dbc48031\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.008939 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-config-data\") pod \"edbc8a68-6254-4b10-8baa-ef87dbc48031\" (UID: 
\"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.009041 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-nova-metadata-tls-certs\") pod \"edbc8a68-6254-4b10-8baa-ef87dbc48031\" (UID: \"edbc8a68-6254-4b10-8baa-ef87dbc48031\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.013526 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edbc8a68-6254-4b10-8baa-ef87dbc48031-logs" (OuterVolumeSpecName: "logs") pod "edbc8a68-6254-4b10-8baa-ef87dbc48031" (UID: "edbc8a68-6254-4b10-8baa-ef87dbc48031"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.030432 4624 scope.go:117] "RemoveContainer" containerID="377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.092318 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbc8a68-6254-4b10-8baa-ef87dbc48031-kube-api-access-wdfvk" (OuterVolumeSpecName: "kube-api-access-wdfvk") pod "edbc8a68-6254-4b10-8baa-ef87dbc48031" (UID: "edbc8a68-6254-4b10-8baa-ef87dbc48031"). InnerVolumeSpecName "kube-api-access-wdfvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.112499 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edbc8a68-6254-4b10-8baa-ef87dbc48031" (UID: "edbc8a68-6254-4b10-8baa-ef87dbc48031"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.116749 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdfvk\" (UniqueName: \"kubernetes.io/projected/edbc8a68-6254-4b10-8baa-ef87dbc48031-kube-api-access-wdfvk\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.116788 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbc8a68-6254-4b10-8baa-ef87dbc48031-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.116799 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.144663 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-config-data" (OuterVolumeSpecName: "config-data") pod "edbc8a68-6254-4b10-8baa-ef87dbc48031" (UID: "edbc8a68-6254-4b10-8baa-ef87dbc48031"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.168332 4624 scope.go:117] "RemoveContainer" containerID="33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.175140 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5\": container with ID starting with 33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5 not found: ID does not exist" containerID="33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.175206 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5"} err="failed to get container status \"33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5\": rpc error: code = NotFound desc = could not find container \"33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5\": container with ID starting with 33dadba448ca5042c27e8d880a3f2a1b149eb898bf8a1bbd8a29abf091522ca5 not found: ID does not exist" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.175252 4624 scope.go:117] "RemoveContainer" containerID="377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.175701 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3\": container with ID starting with 377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3 not found: ID does not exist" containerID="377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.175745 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3"} err="failed to get container status \"377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3\": rpc error: code = NotFound desc = could not find container \"377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3\": container with ID starting with 377933fc11c3a827bf66199ebe91d54af49c50a1c1d88bfb090b21d7b1bca8f3 not found: ID does not exist" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.197368 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "edbc8a68-6254-4b10-8baa-ef87dbc48031" (UID: "edbc8a68-6254-4b10-8baa-ef87dbc48031"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.221817 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.222873 4624 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbc8a68-6254-4b10-8baa-ef87dbc48031-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.226315 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.324341 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-config-data\") pod \"f19a482f-ce28-4c03-8976-6fcf560499aa\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.324517 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-internal-tls-certs\") pod \"f19a482f-ce28-4c03-8976-6fcf560499aa\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.324562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-public-tls-certs\") pod \"f19a482f-ce28-4c03-8976-6fcf560499aa\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.324933 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19a482f-ce28-4c03-8976-6fcf560499aa-logs\") pod \"f19a482f-ce28-4c03-8976-6fcf560499aa\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.324967 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcfgg\" (UniqueName: \"kubernetes.io/projected/f19a482f-ce28-4c03-8976-6fcf560499aa-kube-api-access-xcfgg\") pod \"f19a482f-ce28-4c03-8976-6fcf560499aa\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.325004 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-combined-ca-bundle\") pod \"f19a482f-ce28-4c03-8976-6fcf560499aa\" (UID: \"f19a482f-ce28-4c03-8976-6fcf560499aa\") " Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.325527 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19a482f-ce28-4c03-8976-6fcf560499aa-logs" (OuterVolumeSpecName: "logs") pod "f19a482f-ce28-4c03-8976-6fcf560499aa" (UID: "f19a482f-ce28-4c03-8976-6fcf560499aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.326632 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19a482f-ce28-4c03-8976-6fcf560499aa-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.349447 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19a482f-ce28-4c03-8976-6fcf560499aa-kube-api-access-xcfgg" (OuterVolumeSpecName: "kube-api-access-xcfgg") pod "f19a482f-ce28-4c03-8976-6fcf560499aa" (UID: "f19a482f-ce28-4c03-8976-6fcf560499aa"). InnerVolumeSpecName "kube-api-access-xcfgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.374199 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.388945 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f19a482f-ce28-4c03-8976-6fcf560499aa" (UID: "f19a482f-ce28-4c03-8976-6fcf560499aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.405435 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.416473 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-config-data" (OuterVolumeSpecName: "config-data") pod "f19a482f-ce28-4c03-8976-6fcf560499aa" (UID: "f19a482f-ce28-4c03-8976-6fcf560499aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.421996 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.422474 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-api" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422487 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-api" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.422515 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1bfb27-f028-4b0c-a657-a8435c7fcf72" containerName="nova-manage" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422522 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1bfb27-f028-4b0c-a657-a8435c7fcf72" containerName="nova-manage" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.422546 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-log" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422552 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-log" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 
03:58:23.422562 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-log" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422568 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-log" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.422581 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-metadata" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422589 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-metadata" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422768 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-api" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422779 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" containerName="nova-api-log" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422794 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-log" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422803 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-metadata" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.422815 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1bfb27-f028-4b0c-a657-a8435c7fcf72" containerName="nova-manage" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.424112 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.429564 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.429595 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcfgg\" (UniqueName: \"kubernetes.io/projected/f19a482f-ce28-4c03-8976-6fcf560499aa-kube-api-access-xcfgg\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.429606 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.437576 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f19a482f-ce28-4c03-8976-6fcf560499aa" (UID: "f19a482f-ce28-4c03-8976-6fcf560499aa"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.438267 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.441069 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.456486 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.464169 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.468560 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f19a482f-ce28-4c03-8976-6fcf560499aa" (UID: "f19a482f-ce28-4c03-8976-6fcf560499aa"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.468795 4624 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 03:58:23 crc kubenswrapper[4624]: E0228 03:58:23.468899 4624 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e5689b27-efcc-4238-ab31-f85edaa239d6" containerName="nova-scheduler-scheduler" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.474726 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532371 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-config-data\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532451 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjd77\" (UniqueName: \"kubernetes.io/projected/84e801c6-735b-4858-81d4-2dac7c9eba75-kube-api-access-jjd77\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532494 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532555 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532585 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e801c6-735b-4858-81d4-2dac7c9eba75-logs\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532707 4624 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.532724 4624 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19a482f-ce28-4c03-8976-6fcf560499aa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.635306 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.635371 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e801c6-735b-4858-81d4-2dac7c9eba75-logs\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.635526 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-config-data\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.635556 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjd77\" (UniqueName: \"kubernetes.io/projected/84e801c6-735b-4858-81d4-2dac7c9eba75-kube-api-access-jjd77\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.635591 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.636368 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e801c6-735b-4858-81d4-2dac7c9eba75-logs\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.644357 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " 
pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.644805 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-config-data\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.656239 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjd77\" (UniqueName: \"kubernetes.io/projected/84e801c6-735b-4858-81d4-2dac7c9eba75-kube-api-access-jjd77\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.663931 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84e801c6-735b-4858-81d4-2dac7c9eba75-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84e801c6-735b-4858-81d4-2dac7c9eba75\") " pod="openstack/nova-metadata-0" Feb 28 03:58:23 crc kubenswrapper[4624]: I0228 03:58:23.771425 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.030594 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f19a482f-ce28-4c03-8976-6fcf560499aa","Type":"ContainerDied","Data":"ae61b7765f68d341ef0a06b48d9a8c1f3fe0f5100871ac010eb5f9ed59fea30a"} Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.030971 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.031017 4624 scope.go:117] "RemoveContainer" containerID="4bc29c59a71404544a0cb3a7fc7a8eff69997d8970dd25cd885e43b65c9c9da0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.115419 4624 scope.go:117] "RemoveContainer" containerID="28fdeb9ea94d80b6fb301f94b399ffea3577fa9edca515ab69fbe5fa9d1de980" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.129001 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" path="/var/lib/kubelet/pods/edbc8a68-6254-4b10-8baa-ef87dbc48031/volumes" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.135397 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.148631 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.150356 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.155940 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.159986 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.185155 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.187250 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.193699 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.194017 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.206980 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.222885 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.258260 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.258366 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78266014-e4d1-459b-b48f-a8b21a17cce3-logs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.258419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.258464 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-public-tls-certs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.258587 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6nw\" (UniqueName: \"kubernetes.io/projected/78266014-e4d1-459b-b48f-a8b21a17cce3-kube-api-access-9f6nw\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.258635 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-config-data\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.301679 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.322949 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.323668 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.327278 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.361888 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.362697 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78266014-e4d1-459b-b48f-a8b21a17cce3-logs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.362863 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.363057 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-public-tls-certs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.363399 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78266014-e4d1-459b-b48f-a8b21a17cce3-logs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.363408 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6nw\" (UniqueName: \"kubernetes.io/projected/78266014-e4d1-459b-b48f-a8b21a17cce3-kube-api-access-9f6nw\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.363643 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-config-data\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.373861 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-public-tls-certs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.379834 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-config-data\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.383999 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.391850 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78266014-e4d1-459b-b48f-a8b21a17cce3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.392415 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6nw\" (UniqueName: \"kubernetes.io/projected/78266014-e4d1-459b-b48f-a8b21a17cce3-kube-api-access-9f6nw\") pod \"nova-api-0\" (UID: \"78266014-e4d1-459b-b48f-a8b21a17cce3\") " 
pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.549847 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 03:58:24 crc kubenswrapper[4624]: I0228 03:58:24.990038 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:24.998940 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.083685 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkhlb\" (UniqueName: \"kubernetes.io/projected/e5689b27-efcc-4238-ab31-f85edaa239d6-kube-api-access-mkhlb\") pod \"e5689b27-efcc-4238-ab31-f85edaa239d6\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.083944 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-config-data\") pod \"e5689b27-efcc-4238-ab31-f85edaa239d6\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.084194 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-combined-ca-bundle\") pod \"e5689b27-efcc-4238-ab31-f85edaa239d6\" (UID: \"e5689b27-efcc-4238-ab31-f85edaa239d6\") " Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.086195 4624 generic.go:334] "Generic (PLEG): container finished" podID="e5689b27-efcc-4238-ab31-f85edaa239d6" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" exitCode=0 Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.086381 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e5689b27-efcc-4238-ab31-f85edaa239d6","Type":"ContainerDied","Data":"5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2"} Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.086417 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5689b27-efcc-4238-ab31-f85edaa239d6","Type":"ContainerDied","Data":"b46e127717358a610eb9982f150d72b489c019a3d9e7d9849e2a88fd915f8af2"} Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.086439 4624 scope.go:117] "RemoveContainer" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.086630 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.102330 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5689b27-efcc-4238-ab31-f85edaa239d6-kube-api-access-mkhlb" (OuterVolumeSpecName: "kube-api-access-mkhlb") pod "e5689b27-efcc-4238-ab31-f85edaa239d6" (UID: "e5689b27-efcc-4238-ab31-f85edaa239d6"). InnerVolumeSpecName "kube-api-access-mkhlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.104010 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78266014-e4d1-459b-b48f-a8b21a17cce3","Type":"ContainerStarted","Data":"391e405af4e1951b48b9dc12d11badc37e66aadf8240d69afad8385f7456e49f"} Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.137554 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e801c6-735b-4858-81d4-2dac7c9eba75","Type":"ContainerStarted","Data":"0126dd7aa1c1e1c12d9ff305967f07cdb3b4f93badc53ff9f40db7e88db9ff8d"} Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.137612 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e801c6-735b-4858-81d4-2dac7c9eba75","Type":"ContainerStarted","Data":"9598f4b8d479e8482dc3e743c06e56da567a1dc5cf2278064a86705602e3c944"} Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.137625 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e801c6-735b-4858-81d4-2dac7c9eba75","Type":"ContainerStarted","Data":"2348ec585737a4d3b8e938731e678378a9f7a3b477bb2a4a351327b623d414d7"} Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.144822 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-config-data" (OuterVolumeSpecName: "config-data") pod "e5689b27-efcc-4238-ab31-f85edaa239d6" (UID: "e5689b27-efcc-4238-ab31-f85edaa239d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.147725 4624 scope.go:117] "RemoveContainer" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" Feb 28 03:58:25 crc kubenswrapper[4624]: E0228 03:58:25.148455 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2\": container with ID starting with 5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2 not found: ID does not exist" containerID="5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.148492 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2"} err="failed to get container status \"5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2\": rpc error: code = NotFound desc = could not find container \"5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2\": container with ID starting with 5f04cccd3311583029b900b8c33ef02b4ebebda6a483e92cdaa3edd93c8610a2 not found: ID does not exist" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.160580 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.160562094 podStartE2EDuration="2.160562094s" podCreationTimestamp="2026-02-28 03:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:58:25.159156266 +0000 UTC m=+1359.823195575" watchObservedRunningTime="2026-02-28 03:58:25.160562094 +0000 UTC m=+1359.824601403" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.171260 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5689b27-efcc-4238-ab31-f85edaa239d6" (UID: "e5689b27-efcc-4238-ab31-f85edaa239d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.188678 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.188709 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5689b27-efcc-4238-ab31-f85edaa239d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.188741 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkhlb\" (UniqueName: \"kubernetes.io/projected/e5689b27-efcc-4238-ab31-f85edaa239d6-kube-api-access-mkhlb\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.424377 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.434587 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.460946 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:58:25 crc kubenswrapper[4624]: E0228 03:58:25.461567 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5689b27-efcc-4238-ab31-f85edaa239d6" containerName="nova-scheduler-scheduler" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.461597 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5689b27-efcc-4238-ab31-f85edaa239d6" containerName="nova-scheduler-scheduler" Feb 28 
03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.461860 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5689b27-efcc-4238-ab31-f85edaa239d6" containerName="nova-scheduler-scheduler" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.462879 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.466663 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.481947 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.602919 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-config-data\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.603529 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.603579 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dkf\" (UniqueName: \"kubernetes.io/projected/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-kube-api-access-s6dkf\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.705878 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-config-data\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.705971 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.705998 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dkf\" (UniqueName: \"kubernetes.io/projected/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-kube-api-access-s6dkf\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.712569 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-config-data\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.713958 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.737280 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dkf\" (UniqueName: \"kubernetes.io/projected/82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6-kube-api-access-s6dkf\") pod \"nova-scheduler-0\" (UID: 
\"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6\") " pod="openstack/nova-scheduler-0" Feb 28 03:58:25 crc kubenswrapper[4624]: I0228 03:58:25.780903 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 03:58:26 crc kubenswrapper[4624]: I0228 03:58:26.104349 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5689b27-efcc-4238-ab31-f85edaa239d6" path="/var/lib/kubelet/pods/e5689b27-efcc-4238-ab31-f85edaa239d6/volumes" Feb 28 03:58:26 crc kubenswrapper[4624]: I0228 03:58:26.105719 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19a482f-ce28-4c03-8976-6fcf560499aa" path="/var/lib/kubelet/pods/f19a482f-ce28-4c03-8976-6fcf560499aa/volumes" Feb 28 03:58:26 crc kubenswrapper[4624]: I0228 03:58:26.165694 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78266014-e4d1-459b-b48f-a8b21a17cce3","Type":"ContainerStarted","Data":"1bf4cb9bda49f87e2bf7851d0c8f08dd017d80d275a427fd48dca52323dbbfc0"} Feb 28 03:58:26 crc kubenswrapper[4624]: I0228 03:58:26.165826 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78266014-e4d1-459b-b48f-a8b21a17cce3","Type":"ContainerStarted","Data":"afdfabd19cc010ea2401231eb7977d96e8af25d51fee968c81f436d35f19857f"} Feb 28 03:58:26 crc kubenswrapper[4624]: I0228 03:58:26.207276 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.207259129 podStartE2EDuration="2.207259129s" podCreationTimestamp="2026-02-28 03:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:58:26.205478101 +0000 UTC m=+1360.869517410" watchObservedRunningTime="2026-02-28 03:58:26.207259129 +0000 UTC m=+1360.871298438" Feb 28 03:58:26 crc kubenswrapper[4624]: I0228 03:58:26.295760 4624 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-scheduler-0"] Feb 28 03:58:27 crc kubenswrapper[4624]: I0228 03:58:27.178268 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6","Type":"ContainerStarted","Data":"2c538755e948f931f796667ad1e0109230747d9c873a64e744b742dbe1d2023f"} Feb 28 03:58:27 crc kubenswrapper[4624]: I0228 03:58:27.178718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6","Type":"ContainerStarted","Data":"8021903b91b2b2a2120e3c569fd618e6b2dc81f0198b0cee3485309a8a2f0957"} Feb 28 03:58:27 crc kubenswrapper[4624]: I0228 03:58:27.212738 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.212709114 podStartE2EDuration="2.212709114s" podCreationTimestamp="2026-02-28 03:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:58:27.204337982 +0000 UTC m=+1361.868377301" watchObservedRunningTime="2026-02-28 03:58:27.212709114 +0000 UTC m=+1361.876748423" Feb 28 03:58:27 crc kubenswrapper[4624]: I0228 03:58:27.842513 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 03:58:27 crc kubenswrapper[4624]: I0228 03:58:27.842791 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="edbc8a68-6254-4b10-8baa-ef87dbc48031" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Feb 28 03:58:28 crc kubenswrapper[4624]: I0228 03:58:28.794586 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:58:28 crc kubenswrapper[4624]: I0228 03:58:28.795937 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.129867 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.200270 4624 generic.go:334] "Generic (PLEG): container finished" podID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerID="3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d" exitCode=137 Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.200348 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerDied","Data":"3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d"} Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.200435 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85e63197-e526-4c05-9dfa-b65b4aac331f","Type":"ContainerDied","Data":"80a733fe92b51a5ad4f531db8e90a9be4785431885f23216ce16f9833534d861"} Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.200464 4624 scope.go:117] "RemoveContainer" containerID="3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.200376 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208313 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxxbx\" (UniqueName: \"kubernetes.io/projected/85e63197-e526-4c05-9dfa-b65b4aac331f-kube-api-access-dxxbx\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-config-data\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208423 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-log-httpd\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208581 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-run-httpd\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208688 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-sg-core-conf-yaml\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208746 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-scripts\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.208848 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-combined-ca-bundle\") pod \"85e63197-e526-4c05-9dfa-b65b4aac331f\" (UID: \"85e63197-e526-4c05-9dfa-b65b4aac331f\") " Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.209352 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.210403 4624 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.210864 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.229866 4624 scope.go:117] "RemoveContainer" containerID="1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.232934 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e63197-e526-4c05-9dfa-b65b4aac331f-kube-api-access-dxxbx" (OuterVolumeSpecName: "kube-api-access-dxxbx") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "kube-api-access-dxxbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.233920 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-scripts" (OuterVolumeSpecName: "scripts") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.262836 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.312445 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxxbx\" (UniqueName: \"kubernetes.io/projected/85e63197-e526-4c05-9dfa-b65b4aac331f-kube-api-access-dxxbx\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.312478 4624 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85e63197-e526-4c05-9dfa-b65b4aac331f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.312489 4624 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.312499 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.341392 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.342031 4624 scope.go:117] "RemoveContainer" containerID="290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.363367 4624 scope.go:117] "RemoveContainer" containerID="2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.368839 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-config-data" (OuterVolumeSpecName: "config-data") pod "85e63197-e526-4c05-9dfa-b65b4aac331f" (UID: "85e63197-e526-4c05-9dfa-b65b4aac331f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.389343 4624 scope.go:117] "RemoveContainer" containerID="3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.389816 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d\": container with ID starting with 3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d not found: ID does not exist" containerID="3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.389860 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d"} err="failed to get container status \"3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d\": rpc error: code = NotFound desc = could not find container \"3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d\": container with ID starting with 
3305a4f00db96ff303a086c42c356ce23c13302317f4ee65272544c120376e9d not found: ID does not exist" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.389917 4624 scope.go:117] "RemoveContainer" containerID="1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.390373 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51\": container with ID starting with 1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51 not found: ID does not exist" containerID="1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.390401 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51"} err="failed to get container status \"1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51\": rpc error: code = NotFound desc = could not find container \"1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51\": container with ID starting with 1c5a0c6987241a16a89b36ba012022955900cebf79b16f6acaeb3f498a51be51 not found: ID does not exist" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.390419 4624 scope.go:117] "RemoveContainer" containerID="290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.390715 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d\": container with ID starting with 290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d not found: ID does not exist" containerID="290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d" Feb 28 03:58:29 crc 
kubenswrapper[4624]: I0228 03:58:29.390748 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d"} err="failed to get container status \"290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d\": rpc error: code = NotFound desc = could not find container \"290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d\": container with ID starting with 290be46f29b4fabfadd3c2d86caf3f76846ae3211616cc1a02cacdce049fa19d not found: ID does not exist" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.390761 4624 scope.go:117] "RemoveContainer" containerID="2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.391160 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df\": container with ID starting with 2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df not found: ID does not exist" containerID="2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.391185 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df"} err="failed to get container status \"2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df\": rpc error: code = NotFound desc = could not find container \"2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df\": container with ID starting with 2b50432958e8d17e1a5c85809026f4fc2799ce9d5d8b08c266c98837d7e346df not found: ID does not exist" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.414103 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.414124 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e63197-e526-4c05-9dfa-b65b4aac331f-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.539123 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.547036 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.576240 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.577035 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-notification-agent" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577070 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-notification-agent" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.577109 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="sg-core" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577120 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="sg-core" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.577173 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-central-agent" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577182 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" 
containerName="ceilometer-central-agent" Feb 28 03:58:29 crc kubenswrapper[4624]: E0228 03:58:29.577204 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="proxy-httpd" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577215 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="proxy-httpd" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577448 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-notification-agent" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577478 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="ceilometer-central-agent" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577494 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="sg-core" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.577509 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" containerName="proxy-httpd" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.579959 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.582368 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.585248 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.585343 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.597388 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.724798 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/954e38ba-b661-4225-b29e-5c2b4a1b8675-run-httpd\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.724868 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.724925 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.725142 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-scripts\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.725214 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g52g\" (UniqueName: \"kubernetes.io/projected/954e38ba-b661-4225-b29e-5c2b4a1b8675-kube-api-access-5g52g\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.725289 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.725383 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/954e38ba-b661-4225-b29e-5c2b4a1b8675-log-httpd\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.725428 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-config-data\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827078 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/954e38ba-b661-4225-b29e-5c2b4a1b8675-log-httpd\") pod \"ceilometer-0\" (UID: 
\"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827143 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-config-data\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827249 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/954e38ba-b661-4225-b29e-5c2b4a1b8675-run-httpd\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827283 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827310 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827345 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-scripts\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827372 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5g52g\" (UniqueName: \"kubernetes.io/projected/954e38ba-b661-4225-b29e-5c2b4a1b8675-kube-api-access-5g52g\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.827398 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.828156 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/954e38ba-b661-4225-b29e-5c2b4a1b8675-run-httpd\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.828742 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/954e38ba-b661-4225-b29e-5c2b4a1b8675-log-httpd\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.833708 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.836770 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-scripts\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: 
I0228 03:58:29.837859 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.839714 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.848342 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/954e38ba-b661-4225-b29e-5c2b4a1b8675-config-data\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.851613 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g52g\" (UniqueName: \"kubernetes.io/projected/954e38ba-b661-4225-b29e-5c2b4a1b8675-kube-api-access-5g52g\") pod \"ceilometer-0\" (UID: \"954e38ba-b661-4225-b29e-5c2b4a1b8675\") " pod="openstack/ceilometer-0" Feb 28 03:58:29 crc kubenswrapper[4624]: I0228 03:58:29.911498 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 03:58:30 crc kubenswrapper[4624]: I0228 03:58:30.110776 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e63197-e526-4c05-9dfa-b65b4aac331f" path="/var/lib/kubelet/pods/85e63197-e526-4c05-9dfa-b65b4aac331f/volumes" Feb 28 03:58:30 crc kubenswrapper[4624]: I0228 03:58:30.464697 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 03:58:30 crc kubenswrapper[4624]: I0228 03:58:30.781929 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 03:58:31 crc kubenswrapper[4624]: I0228 03:58:31.228863 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"954e38ba-b661-4225-b29e-5c2b4a1b8675","Type":"ContainerStarted","Data":"4ecee782b24975fd947916b91202e1be1cae39029d139604bc923baac3986eb7"} Feb 28 03:58:31 crc kubenswrapper[4624]: I0228 03:58:31.229297 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"954e38ba-b661-4225-b29e-5c2b4a1b8675","Type":"ContainerStarted","Data":"4a0e37a7927720d1ebd4fd8fe9fa3966329e5255acda4c2fcb19d19cdd253141"} Feb 28 03:58:32 crc kubenswrapper[4624]: I0228 03:58:32.242192 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"954e38ba-b661-4225-b29e-5c2b4a1b8675","Type":"ContainerStarted","Data":"e904b75ce13143af68969c707d1d3c347c006538dd485d52e7b353439c3e87b4"} Feb 28 03:58:33 crc kubenswrapper[4624]: I0228 03:58:33.256298 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"954e38ba-b661-4225-b29e-5c2b4a1b8675","Type":"ContainerStarted","Data":"c93db1cbe35bc64d303b038bd1eddce9ca3b2fcaabe5630157e9e6f6be4acc69"} Feb 28 03:58:33 crc kubenswrapper[4624]: I0228 03:58:33.772499 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 03:58:33 
crc kubenswrapper[4624]: I0228 03:58:33.772977 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 03:58:34 crc kubenswrapper[4624]: I0228 03:58:34.155294 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:58:34 crc kubenswrapper[4624]: I0228 03:58:34.323310 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988c5cd-svksm" podUID="6ccc2a9a-c3cc-4ddb-a700-86713957337e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 03:58:34 crc kubenswrapper[4624]: I0228 03:58:34.550772 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 03:58:34 crc kubenswrapper[4624]: I0228 03:58:34.551333 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 03:58:34 crc kubenswrapper[4624]: I0228 03:58:34.792318 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="84e801c6-735b-4858-81d4-2dac7c9eba75" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:58:34 crc kubenswrapper[4624]: I0228 03:58:34.792666 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="84e801c6-735b-4858-81d4-2dac7c9eba75" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 28 03:58:35 crc kubenswrapper[4624]: I0228 03:58:35.283711 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"954e38ba-b661-4225-b29e-5c2b4a1b8675","Type":"ContainerStarted","Data":"9cccd386f13a023196bf3e08128971172e3eb009cc99e2a2605356790740be04"} Feb 28 03:58:35 crc kubenswrapper[4624]: I0228 03:58:35.285492 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 03:58:35 crc kubenswrapper[4624]: I0228 03:58:35.332753 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7618746720000003 podStartE2EDuration="6.332718396s" podCreationTimestamp="2026-02-28 03:58:29 +0000 UTC" firstStartedPulling="2026-02-28 03:58:30.472455334 +0000 UTC m=+1365.136494643" lastFinishedPulling="2026-02-28 03:58:34.043299048 +0000 UTC m=+1368.707338367" observedRunningTime="2026-02-28 03:58:35.313874445 +0000 UTC m=+1369.977913754" watchObservedRunningTime="2026-02-28 03:58:35.332718396 +0000 UTC m=+1369.996757705" Feb 28 03:58:35 crc kubenswrapper[4624]: I0228 03:58:35.782484 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 03:58:35 crc kubenswrapper[4624]: I0228 03:58:35.823897 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 03:58:36 crc kubenswrapper[4624]: I0228 03:58:36.050346 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="78266014-e4d1-459b-b48f-a8b21a17cce3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:58:36 crc kubenswrapper[4624]: I0228 03:58:36.050369 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="78266014-e4d1-459b-b48f-a8b21a17cce3" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 03:58:36 crc kubenswrapper[4624]: I0228 03:58:36.339531 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 03:58:43 crc kubenswrapper[4624]: I0228 03:58:43.776698 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 03:58:43 crc kubenswrapper[4624]: I0228 03:58:43.779096 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 03:58:43 crc kubenswrapper[4624]: I0228 03:58:43.783345 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 03:58:44 crc kubenswrapper[4624]: I0228 03:58:44.395252 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 03:58:44 crc kubenswrapper[4624]: I0228 03:58:44.559285 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 03:58:44 crc kubenswrapper[4624]: I0228 03:58:44.560961 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 03:58:44 crc kubenswrapper[4624]: I0228 03:58:44.569813 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 03:58:44 crc kubenswrapper[4624]: I0228 03:58:44.570694 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 03:58:45 crc kubenswrapper[4624]: I0228 03:58:45.399114 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 03:58:45 crc kubenswrapper[4624]: I0228 03:58:45.412165 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 
03:58:47 crc kubenswrapper[4624]: I0228 03:58:47.063346 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:58:47 crc kubenswrapper[4624]: I0228 03:58:47.066678 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:58:48 crc kubenswrapper[4624]: I0228 03:58:48.824109 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cc988c5cd-svksm" Feb 28 03:58:48 crc kubenswrapper[4624]: I0228 03:58:48.932680 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b4bc59cd8-fkd4p"] Feb 28 03:58:48 crc kubenswrapper[4624]: I0228 03:58:48.932982 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon-log" containerID="cri-o://23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2" gracePeriod=30 Feb 28 03:58:48 crc kubenswrapper[4624]: I0228 03:58:48.934454 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" containerID="cri-o://165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3" gracePeriod=30 Feb 28 03:58:48 crc kubenswrapper[4624]: I0228 03:58:48.954913 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 28 03:58:49 crc kubenswrapper[4624]: I0228 03:58:49.539985 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:58:49 crc kubenswrapper[4624]: I0228 03:58:49.540558 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:58:52 crc kubenswrapper[4624]: I0228 03:58:52.353054 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:49958->10.217.0.149:8443: read: connection reset by peer" Feb 28 03:58:52 crc kubenswrapper[4624]: I0228 03:58:52.498931 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerID="165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3" exitCode=0 Feb 28 03:58:52 crc kubenswrapper[4624]: I0228 03:58:52.499012 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerDied","Data":"165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3"} Feb 28 03:58:52 crc kubenswrapper[4624]: I0228 03:58:52.499095 4624 scope.go:117] "RemoveContainer" containerID="93a0882fc2c46a1d7ae2f33f5d46e75723aa417d1b35f6b58392edea81f40dce" Feb 28 03:58:54 crc kubenswrapper[4624]: I0228 03:58:54.146143 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.149:8443: connect: connection refused" Feb 28 03:58:57 crc kubenswrapper[4624]: I0228 03:58:57.890615 4624 scope.go:117] "RemoveContainer" containerID="3bd5875b8feee7fa802c0aa6653df90bfd201a3bf80a047fe55aa1467d50327f" Feb 28 03:58:59 crc kubenswrapper[4624]: I0228 03:58:59.927197 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 03:59:04 crc kubenswrapper[4624]: I0228 03:59:04.146604 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.040459 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8jwjv"] Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.044968 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.073158 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jwjv"] Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.186594 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72p7\" (UniqueName: \"kubernetes.io/projected/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-kube-api-access-r72p7\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.186645 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-utilities\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.186727 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-catalog-content\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.289399 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72p7\" (UniqueName: \"kubernetes.io/projected/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-kube-api-access-r72p7\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.289794 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-utilities\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.289876 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-catalog-content\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.290440 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-utilities\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.290481 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-catalog-content\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.328475 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r72p7\" (UniqueName: \"kubernetes.io/projected/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-kube-api-access-r72p7\") pod \"redhat-operators-8jwjv\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.388587 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:06 crc kubenswrapper[4624]: I0228 03:59:06.936210 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jwjv"] Feb 28 03:59:07 crc kubenswrapper[4624]: I0228 03:59:07.693340 4624 generic.go:334] "Generic (PLEG): container finished" podID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerID="5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4" exitCode=0 Feb 28 03:59:07 crc kubenswrapper[4624]: I0228 03:59:07.693476 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerDied","Data":"5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4"} Feb 28 03:59:07 crc kubenswrapper[4624]: I0228 03:59:07.693717 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerStarted","Data":"57399d5751e915f59b9babfbad13f8708f1b22bcd642c30fad896561f8127a6a"} Feb 28 03:59:09 crc kubenswrapper[4624]: I0228 03:59:09.356935 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:59:09 crc kubenswrapper[4624]: I0228 03:59:09.738254 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerStarted","Data":"03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1"} Feb 28 03:59:10 crc kubenswrapper[4624]: I0228 03:59:10.483362 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:59:14 crc kubenswrapper[4624]: I0228 03:59:14.146674 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b4bc59cd8-fkd4p" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 28 03:59:15 crc kubenswrapper[4624]: I0228 03:59:15.804552 4624 generic.go:334] "Generic (PLEG): container finished" podID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerID="03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1" exitCode=0 Feb 28 03:59:15 crc kubenswrapper[4624]: I0228 03:59:15.805044 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerDied","Data":"03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1"} Feb 28 03:59:16 crc kubenswrapper[4624]: I0228 03:59:16.032940 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="rabbitmq" containerID="cri-o://4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b" gracePeriod=604795 Feb 28 03:59:16 crc kubenswrapper[4624]: I0228 03:59:16.803213 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="rabbitmq" containerID="cri-o://2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28" gracePeriod=604793 Feb 28 03:59:16 crc kubenswrapper[4624]: I0228 03:59:16.818902 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerStarted","Data":"6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca"} Feb 28 03:59:16 crc kubenswrapper[4624]: I0228 03:59:16.853546 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8jwjv" 
podStartSLOduration=3.3147180929999998 podStartE2EDuration="11.853520538s" podCreationTimestamp="2026-02-28 03:59:05 +0000 UTC" firstStartedPulling="2026-02-28 03:59:07.697049954 +0000 UTC m=+1402.361089303" lastFinishedPulling="2026-02-28 03:59:16.235852439 +0000 UTC m=+1410.899891748" observedRunningTime="2026-02-28 03:59:16.843628044 +0000 UTC m=+1411.507667353" watchObservedRunningTime="2026-02-28 03:59:16.853520538 +0000 UTC m=+1411.517559847" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.440586 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.544335 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.544418 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.544491 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.549387 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.549481 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133" gracePeriod=600 Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.554771 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca1103dd-2624-40c7-9cc4-cf55c51633a2-logs\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.555708 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-tls-certs\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.555966 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-config-data\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.556791 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-combined-ca-bundle\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.556835 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdpxz\" 
(UniqueName: \"kubernetes.io/projected/ca1103dd-2624-40c7-9cc4-cf55c51633a2-kube-api-access-zdpxz\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.556875 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-secret-key\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.556934 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-scripts\") pod \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\" (UID: \"ca1103dd-2624-40c7-9cc4-cf55c51633a2\") " Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.560523 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca1103dd-2624-40c7-9cc4-cf55c51633a2-logs" (OuterVolumeSpecName: "logs") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.588538 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca1103dd-2624-40c7-9cc4-cf55c51633a2-kube-api-access-zdpxz" (OuterVolumeSpecName: "kube-api-access-zdpxz") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "kube-api-access-zdpxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.594036 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.608529 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-scripts" (OuterVolumeSpecName: "scripts") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.611577 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.633918 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: E0228 03:59:19.647276 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ccd115_f935_454b_94cc_26327d5df491.slice/crio-3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133.scope\": RecentStats: unable to find data in memory cache]" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.660950 4624 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca1103dd-2624-40c7-9cc4-cf55c51633a2-logs\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.661009 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.661023 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.661034 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdpxz\" (UniqueName: \"kubernetes.io/projected/ca1103dd-2624-40c7-9cc4-cf55c51633a2-kube-api-access-zdpxz\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.661045 4624 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ca1103dd-2624-40c7-9cc4-cf55c51633a2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.661053 4624 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.672612 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-config-data" (OuterVolumeSpecName: "config-data") pod "ca1103dd-2624-40c7-9cc4-cf55c51633a2" (UID: "ca1103dd-2624-40c7-9cc4-cf55c51633a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.762855 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca1103dd-2624-40c7-9cc4-cf55c51633a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.855553 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerID="23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2" exitCode=137 Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.855688 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerDied","Data":"23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2"} Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.855724 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4bc59cd8-fkd4p" event={"ID":"ca1103dd-2624-40c7-9cc4-cf55c51633a2","Type":"ContainerDied","Data":"4550eb09f88ea82cec387b4a41cbfcf07033a4848ee90e0a29e68c9324eb5cf9"} Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.855744 4624 scope.go:117] "RemoveContainer" containerID="165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.856129 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b4bc59cd8-fkd4p" Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.866649 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133" exitCode=0 Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.866697 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133"} Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.899933 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b4bc59cd8-fkd4p"] Feb 28 03:59:19 crc kubenswrapper[4624]: I0228 03:59:19.909674 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b4bc59cd8-fkd4p"] Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.060421 4624 scope.go:117] "RemoveContainer" containerID="23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.105596 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" path="/var/lib/kubelet/pods/ca1103dd-2624-40c7-9cc4-cf55c51633a2/volumes" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.120036 4624 scope.go:117] "RemoveContainer" containerID="165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3" Feb 28 03:59:20 crc kubenswrapper[4624]: E0228 03:59:20.121563 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3\": container with ID starting with 165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3 not found: ID does not exist" 
containerID="165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.121617 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3"} err="failed to get container status \"165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3\": rpc error: code = NotFound desc = could not find container \"165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3\": container with ID starting with 165e5f8d858b7c5d23ad73c659a2b1a88413729498b5a841df08ed21c4cb5ff3 not found: ID does not exist" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.121659 4624 scope.go:117] "RemoveContainer" containerID="23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2" Feb 28 03:59:20 crc kubenswrapper[4624]: E0228 03:59:20.122301 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2\": container with ID starting with 23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2 not found: ID does not exist" containerID="23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.122356 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2"} err="failed to get container status \"23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2\": rpc error: code = NotFound desc = could not find container \"23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2\": container with ID starting with 23fc728a3bcba8cb8ad6933f2f4f10b574cb26c922b92f9359842699b64984f2 not found: ID does not exist" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.122389 4624 scope.go:117] 
"RemoveContainer" containerID="59b30a81dc689f74ba07a6866eb43af4d862d6a65c377ecf21944e761adfa908" Feb 28 03:59:20 crc kubenswrapper[4624]: I0228 03:59:20.885498 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a"} Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.855501 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944469 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx6kd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-kube-api-access-rx6kd\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944550 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-erlang-cookie\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944588 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-pod-info\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944643 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-confd\") pod 
\"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944677 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-config-data\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944721 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-plugins\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944745 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-erlang-cookie-secret\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944804 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944842 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-server-conf\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944879 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-plugins-conf\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.944930 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-tls\") pod \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\" (UID: \"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946\") " Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.951146 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.952116 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.953046 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.966711 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-pod-info" (OuterVolumeSpecName: "pod-info") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.966745 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-kube-api-access-rx6kd" (OuterVolumeSpecName: "kube-api-access-rx6kd") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "kube-api-access-rx6kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.968355 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.969506 4624 generic.go:334] "Generic (PLEG): container finished" podID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerID="4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b" exitCode=0 Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.969565 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946","Type":"ContainerDied","Data":"4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b"} Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.969607 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fc13b81-9ecc-4b66-abbd-98c7e4e1c946","Type":"ContainerDied","Data":"e5d2e92cea7d4e46871c773c025cba9f645fd75ef27bbea71f7bc01664f3b3b1"} Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.969627 4624 scope.go:117] "RemoveContainer" containerID="4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.969840 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:22 crc kubenswrapper[4624]: I0228 03:59:22.970354 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.009267 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.043012 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-config-data" (OuterVolumeSpecName: "config-data") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.048257 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.048306 4624 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.048376 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.048392 4624 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-plugins-conf\") on node \"crc\" 
DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.056309 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.056373 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx6kd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-kube-api-access-rx6kd\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.056393 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.056406 4624 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-pod-info\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.056419 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.120047 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.120000 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-server-conf" (OuterVolumeSpecName: "server-conf") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.169408 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.169442 4624 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-server-conf\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.255673 4624 scope.go:117] "RemoveContainer" containerID="95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.330693 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" (UID: "4fc13b81-9ecc-4b66-abbd-98c7e4e1c946"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.334044 4624 scope.go:117] "RemoveContainer" containerID="4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.334793 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b\": container with ID starting with 4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b not found: ID does not exist" containerID="4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.335156 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b"} err="failed to get container status \"4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b\": rpc error: code = NotFound desc = could not find container \"4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b\": container with ID starting with 4d3b860f1ab0b701027cf5727987c8c6d605f4a46e91b96e39928246e3c8e63b not found: ID does not exist" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.335186 4624 scope.go:117] "RemoveContainer" containerID="95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.335544 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5\": container with ID starting with 95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5 not found: ID does not exist" containerID="95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.335566 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5"} err="failed to get container status \"95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5\": rpc error: code = NotFound desc = could not find container \"95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5\": container with ID starting with 95a750d69a12d98418235b8a8114aff4bd716564169a6513765262f25e48caf5 not found: ID does not exist" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.375512 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.599577 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.620960 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.629507 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.671871 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672430 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672450 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672474 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="setup-container" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672483 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="setup-container" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672492 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672500 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672512 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="rabbitmq" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672518 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="rabbitmq" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672534 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="rabbitmq" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672541 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="rabbitmq" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672549 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672557 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672568 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 
28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672574 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672591 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="setup-container" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672596 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="setup-container" Feb 28 03:59:23 crc kubenswrapper[4624]: E0228 03:59:23.672611 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon-log" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672617 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon-log" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672793 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672806 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672813 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be4f891-f796-4d4b-b916-e669037f474a" containerName="rabbitmq" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672827 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon-log" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.672839 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" containerName="rabbitmq" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 
03:59:23.672855 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.673214 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1103dd-2624-40c7-9cc4-cf55c51633a2" containerName="horizon" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.673934 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.680445 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.681039 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.688365 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.688679 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pqtbp" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.688855 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.689141 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.689772 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782496 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-confd\") 
pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782605 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-erlang-cookie\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782638 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-server-conf\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782665 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-tls\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782743 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-plugins\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782791 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-config-data\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782885 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fvhx7\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-kube-api-access-fvhx7\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782924 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4be4f891-f796-4d4b-b916-e669037f474a-erlang-cookie-secret\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.782959 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4be4f891-f796-4d4b-b916-e669037f474a-pod-info\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783023 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-plugins-conf\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783043 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4be4f891-f796-4d4b-b916-e669037f474a\" (UID: \"4be4f891-f796-4d4b-b916-e669037f474a\") " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783360 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00b8a2b6-f64c-452c-ac93-00422b339f64-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783395 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783416 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783437 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783484 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783524 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). 
InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783551 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783711 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2th7k\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-kube-api-access-2th7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783757 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783786 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783799 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783902 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00b8a2b6-f64c-452c-ac93-00422b339f64-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.783975 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.784317 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.784332 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.787318 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.797251 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be4f891-f796-4d4b-b916-e669037f474a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.798252 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.798374 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-kube-api-access-fvhx7" (OuterVolumeSpecName: "kube-api-access-fvhx7") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "kube-api-access-fvhx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.804485 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4be4f891-f796-4d4b-b916-e669037f474a-pod-info" (OuterVolumeSpecName: "pod-info") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.827638 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.830369 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.855103 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-server-conf" (OuterVolumeSpecName: "server-conf") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.866303 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-config-data" (OuterVolumeSpecName: "config-data") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.886807 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2th7k\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-kube-api-access-2th7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.887257 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.887386 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.887483 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00b8a2b6-f64c-452c-ac93-00422b339f64-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.887590 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc 
kubenswrapper[4624]: I0228 03:59:23.887751 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00b8a2b6-f64c-452c-ac93-00422b339f64-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.887883 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.887991 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.888575 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.888726 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.888896 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889074 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvhx7\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-kube-api-access-fvhx7\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889276 4624 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4be4f891-f796-4d4b-b916-e669037f474a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889353 4624 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4be4f891-f796-4d4b-b916-e669037f474a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889431 4624 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889519 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889637 4624 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889723 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.889799 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4be4f891-f796-4d4b-b916-e669037f474a-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.890351 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.891544 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.891661 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.893126 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00b8a2b6-f64c-452c-ac93-00422b339f64-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.893409 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.894781 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.902876 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.907142 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00b8a2b6-f64c-452c-ac93-00422b339f64-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.910827 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00b8a2b6-f64c-452c-ac93-00422b339f64-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.915397 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2th7k\" (UniqueName: 
\"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-kube-api-access-2th7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.926750 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00b8a2b6-f64c-452c-ac93-00422b339f64-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.930950 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.970690 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"00b8a2b6-f64c-452c-ac93-00422b339f64\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.994065 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.994484 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4be4f891-f796-4d4b-b916-e669037f474a" (UID: "4be4f891-f796-4d4b-b916-e669037f474a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.997658 4624 generic.go:334] "Generic (PLEG): container finished" podID="4be4f891-f796-4d4b-b916-e669037f474a" containerID="2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28" exitCode=0 Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.997774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4be4f891-f796-4d4b-b916-e669037f474a","Type":"ContainerDied","Data":"2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28"} Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.997888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4be4f891-f796-4d4b-b916-e669037f474a","Type":"ContainerDied","Data":"71a6843db037f3b2af3c2b6a89953850ec0d00eb3d1c841391debcadc1af077a"} Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.998039 4624 scope.go:117] "RemoveContainer" containerID="2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28" Feb 28 03:59:23 crc kubenswrapper[4624]: I0228 03:59:23.998076 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.003010 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.049358 4624 scope.go:117] "RemoveContainer" containerID="8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.060936 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.084223 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.099509 4624 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4be4f891-f796-4d4b-b916-e669037f474a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.120481 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be4f891-f796-4d4b-b916-e669037f474a" path="/var/lib/kubelet/pods/4be4f891-f796-4d4b-b916-e669037f474a/volumes" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.121546 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc13b81-9ecc-4b66-abbd-98c7e4e1c946" path="/var/lib/kubelet/pods/4fc13b81-9ecc-4b66-abbd-98c7e4e1c946/volumes" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.123694 4624 scope.go:117] "RemoveContainer" containerID="2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.124273 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:59:24 crc kubenswrapper[4624]: E0228 03:59:24.125419 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28\": container with ID starting with 
2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28 not found: ID does not exist" containerID="2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.125485 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28"} err="failed to get container status \"2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28\": rpc error: code = NotFound desc = could not find container \"2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28\": container with ID starting with 2aadf038dad364acd2b88229f86bc53aefb0660d305ea4ca9d0a1f8d79872d28 not found: ID does not exist" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.125520 4624 scope.go:117] "RemoveContainer" containerID="8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.127123 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.127256 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: E0228 03:59:24.130353 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0\": container with ID starting with 8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0 not found: ID does not exist" containerID="8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.130386 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0"} err="failed to get container status \"8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0\": rpc error: code = NotFound desc = could not find container \"8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0\": container with ID starting with 8f592f9fc47ba6210f13a46f7c2821fdfea542ef63c692ddaa63d9690ecda1c0 not found: ID does not exist" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.148379 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jbx89" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.148641 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.148779 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.148751 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.149052 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 03:59:24 crc kubenswrapper[4624]: 
I0228 03:59:24.149437 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.149678 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.307691 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308210 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308254 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308385 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308417 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308440 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308492 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308509 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308598 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvr9\" (UniqueName: 
\"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-kube-api-access-9lvr9\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.308620 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-config-data\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.410812 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411302 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lvr9\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-kube-api-access-9lvr9\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411337 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-config-data\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411366 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411389 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411425 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411483 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411515 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411535 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411574 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.411594 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.412187 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.414498 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.414817 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.415330 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.415898 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-config-data\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.416741 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.447879 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.459728 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.459928 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc 
kubenswrapper[4624]: I0228 03:59:24.460282 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.502803 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.518418 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lvr9\" (UniqueName: \"kubernetes.io/projected/03d202d9-cd01-4f0c-b7dc-9e89a7676c65-kube-api-access-9lvr9\") pod \"rabbitmq-server-0\" (UID: \"03d202d9-cd01-4f0c-b7dc-9e89a7676c65\") " pod="openstack/rabbitmq-server-0" Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.520386 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 03:59:24 crc kubenswrapper[4624]: I0228 03:59:24.766521 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.096601 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"00b8a2b6-f64c-452c-ac93-00422b339f64","Type":"ContainerStarted","Data":"d6252184deff8581a197264324922e3f7aba85a3af36c38bfdc05711e6d10d57"} Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.308870 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 03:59:25 crc kubenswrapper[4624]: W0228 03:59:25.309663 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03d202d9_cd01_4f0c_b7dc_9e89a7676c65.slice/crio-57d8768c4d942e45e16f14fc4b97d26e510ea8da0e00b4762435c9e051ea0e2f WatchSource:0}: Error finding container 57d8768c4d942e45e16f14fc4b97d26e510ea8da0e00b4762435c9e051ea0e2f: Status 404 returned error can't find the container with id 57d8768c4d942e45e16f14fc4b97d26e510ea8da0e00b4762435c9e051ea0e2f Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.500673 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-v5n48"] Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.502769 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.507416 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.539446 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-v5n48"] Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659250 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-config\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659301 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659345 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659409 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " 
pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659424 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659475 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfrqk\" (UniqueName: \"kubernetes.io/projected/98165b6f-5ef8-4e36-9edc-58f677b6110a-kube-api-access-tfrqk\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.659507 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-svc\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.761009 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.762101 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: 
\"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.762310 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.762906 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.762340 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.763062 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfrqk\" (UniqueName: \"kubernetes.io/projected/98165b6f-5ef8-4e36-9edc-58f677b6110a-kube-api-access-tfrqk\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.763366 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: 
\"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.763639 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-svc\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.764251 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-svc\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.764489 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-config\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.764513 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.765146 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-config\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: 
I0228 03:59:25.765243 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.782863 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfrqk\" (UniqueName: \"kubernetes.io/projected/98165b6f-5ef8-4e36-9edc-58f677b6110a-kube-api-access-tfrqk\") pod \"dnsmasq-dns-5576978c7c-v5n48\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:25 crc kubenswrapper[4624]: I0228 03:59:25.903970 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:26 crc kubenswrapper[4624]: I0228 03:59:26.153778 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03d202d9-cd01-4f0c-b7dc-9e89a7676c65","Type":"ContainerStarted","Data":"57d8768c4d942e45e16f14fc4b97d26e510ea8da0e00b4762435c9e051ea0e2f"} Feb 28 03:59:26 crc kubenswrapper[4624]: I0228 03:59:26.219744 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-v5n48"] Feb 28 03:59:26 crc kubenswrapper[4624]: I0228 03:59:26.389127 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:26 crc kubenswrapper[4624]: I0228 03:59:26.391849 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:27 crc kubenswrapper[4624]: I0228 03:59:27.166753 4624 generic.go:334] "Generic (PLEG): container finished" podID="98165b6f-5ef8-4e36-9edc-58f677b6110a" 
containerID="72b0cdd5b69faa7a03ef70200b7daea4cef59b1962de5f0f97ae5648230bca30" exitCode=0 Feb 28 03:59:27 crc kubenswrapper[4624]: I0228 03:59:27.166847 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" event={"ID":"98165b6f-5ef8-4e36-9edc-58f677b6110a","Type":"ContainerDied","Data":"72b0cdd5b69faa7a03ef70200b7daea4cef59b1962de5f0f97ae5648230bca30"} Feb 28 03:59:27 crc kubenswrapper[4624]: I0228 03:59:27.166911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" event={"ID":"98165b6f-5ef8-4e36-9edc-58f677b6110a","Type":"ContainerStarted","Data":"87c1f16b8b53f3c93bd0cc14859e4608e2abb4718a0ad3b185cf9bcb9ef49bc3"} Feb 28 03:59:27 crc kubenswrapper[4624]: I0228 03:59:27.170699 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"00b8a2b6-f64c-452c-ac93-00422b339f64","Type":"ContainerStarted","Data":"7217a1ad389fb9161753fc86363cfb85051aa1873cb11461731b1d7aa784b0a1"} Feb 28 03:59:27 crc kubenswrapper[4624]: I0228 03:59:27.175670 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03d202d9-cd01-4f0c-b7dc-9e89a7676c65","Type":"ContainerStarted","Data":"63138705dff26ac39396eef47e5efa2e26f87ae1ebb98c10e83924e964a7db2b"} Feb 28 03:59:27 crc kubenswrapper[4624]: I0228 03:59:27.460885 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jwjv" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" probeResult="failure" output=< Feb 28 03:59:27 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 03:59:27 crc kubenswrapper[4624]: > Feb 28 03:59:28 crc kubenswrapper[4624]: I0228 03:59:28.193437 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" 
event={"ID":"98165b6f-5ef8-4e36-9edc-58f677b6110a","Type":"ContainerStarted","Data":"0ba3923f541dc3227c165ac4f097f35fb2e8257457998c1486a0ab4e524e0ad2"} Feb 28 03:59:28 crc kubenswrapper[4624]: I0228 03:59:28.193786 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:28 crc kubenswrapper[4624]: I0228 03:59:28.228566 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" podStartSLOduration=3.228535743 podStartE2EDuration="3.228535743s" podCreationTimestamp="2026-02-28 03:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:59:28.213891533 +0000 UTC m=+1422.877930842" watchObservedRunningTime="2026-02-28 03:59:28.228535743 +0000 UTC m=+1422.892575062" Feb 28 03:59:35 crc kubenswrapper[4624]: I0228 03:59:35.906456 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:35 crc kubenswrapper[4624]: I0228 03:59:35.999910 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-lfdh8"] Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.003450 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerName="dnsmasq-dns" containerID="cri-o://2afa3009d79d90b98db31ea963dab79228bf4287abf95a2bcfddb50176b7fa8a" gracePeriod=10 Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.291524 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56f7ccd8f7-zth9s"] Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.293341 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.315283 4624 generic.go:334] "Generic (PLEG): container finished" podID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerID="2afa3009d79d90b98db31ea963dab79228bf4287abf95a2bcfddb50176b7fa8a" exitCode=0 Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.315390 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" event={"ID":"31f2aae0-acdc-4c37-bb36-e7685b9dfe42","Type":"ContainerDied","Data":"2afa3009d79d90b98db31ea963dab79228bf4287abf95a2bcfddb50176b7fa8a"} Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.336854 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f7ccd8f7-zth9s"] Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511102 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-dns-svc\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511525 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbw8\" (UniqueName: \"kubernetes.io/projected/4b13e83c-72f7-4925-abc6-1e284917cb66-kube-api-access-8pbw8\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511553 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-ovsdbserver-nb\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " 
pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511574 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-openstack-edpm-ipam\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511608 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-config\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511673 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-ovsdbserver-sb\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.511730 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-dns-swift-storage-0\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.609570 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.613764 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-config\") pod \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.613846 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-sb\") pod \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.613890 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j6j2\" (UniqueName: \"kubernetes.io/projected/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-kube-api-access-8j6j2\") pod \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.613941 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-svc\") pod \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.613986 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-swift-storage-0\") pod \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614171 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-openstack-edpm-ipam\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614248 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-config\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614463 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-ovsdbserver-sb\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614635 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-dns-swift-storage-0\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614717 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-dns-svc\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614739 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbw8\" (UniqueName: 
\"kubernetes.io/projected/4b13e83c-72f7-4925-abc6-1e284917cb66-kube-api-access-8pbw8\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.614760 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-ovsdbserver-nb\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.616773 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-config\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.616876 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-ovsdbserver-sb\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.617246 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-dns-svc\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.617643 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.621738 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-dns-swift-storage-0\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.621968 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b13e83c-72f7-4925-abc6-1e284917cb66-ovsdbserver-nb\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.665538 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-kube-api-access-8j6j2" (OuterVolumeSpecName: "kube-api-access-8j6j2") pod "31f2aae0-acdc-4c37-bb36-e7685b9dfe42" (UID: "31f2aae0-acdc-4c37-bb36-e7685b9dfe42"). InnerVolumeSpecName "kube-api-access-8j6j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.689367 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbw8\" (UniqueName: \"kubernetes.io/projected/4b13e83c-72f7-4925-abc6-1e284917cb66-kube-api-access-8pbw8\") pod \"dnsmasq-dns-56f7ccd8f7-zth9s\" (UID: \"4b13e83c-72f7-4925-abc6-1e284917cb66\") " pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.722379 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-nb\") pod \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\" (UID: \"31f2aae0-acdc-4c37-bb36-e7685b9dfe42\") " Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.723422 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j6j2\" (UniqueName: \"kubernetes.io/projected/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-kube-api-access-8j6j2\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.733780 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "31f2aae0-acdc-4c37-bb36-e7685b9dfe42" (UID: "31f2aae0-acdc-4c37-bb36-e7685b9dfe42"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.739702 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31f2aae0-acdc-4c37-bb36-e7685b9dfe42" (UID: "31f2aae0-acdc-4c37-bb36-e7685b9dfe42"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.740069 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "31f2aae0-acdc-4c37-bb36-e7685b9dfe42" (UID: "31f2aae0-acdc-4c37-bb36-e7685b9dfe42"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.771798 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-config" (OuterVolumeSpecName: "config") pod "31f2aae0-acdc-4c37-bb36-e7685b9dfe42" (UID: "31f2aae0-acdc-4c37-bb36-e7685b9dfe42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.799561 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31f2aae0-acdc-4c37-bb36-e7685b9dfe42" (UID: "31f2aae0-acdc-4c37-bb36-e7685b9dfe42"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.826657 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.826739 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.826750 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.826762 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.826773 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31f2aae0-acdc-4c37-bb36-e7685b9dfe42-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:36 crc kubenswrapper[4624]: I0228 03:59:36.944291 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.332585 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" event={"ID":"31f2aae0-acdc-4c37-bb36-e7685b9dfe42","Type":"ContainerDied","Data":"5653e232546713383d8fc126cf64eaa43b721361ed022d2d06d5aa03171b78e5"} Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.333001 4624 scope.go:117] "RemoveContainer" containerID="2afa3009d79d90b98db31ea963dab79228bf4287abf95a2bcfddb50176b7fa8a" Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.332724 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-lfdh8" Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.368995 4624 scope.go:117] "RemoveContainer" containerID="89ae84bfaa7a8d8e9f40ad2ff7fcba567cc06abb8194fe80160e54ff60a7e5e3" Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.418466 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-lfdh8"] Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.435608 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-lfdh8"] Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.469601 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jwjv" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" probeResult="failure" output=< Feb 28 03:59:37 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 03:59:37 crc kubenswrapper[4624]: > Feb 28 03:59:37 crc kubenswrapper[4624]: I0228 03:59:37.488920 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56f7ccd8f7-zth9s"] Feb 28 03:59:37 crc kubenswrapper[4624]: W0228 03:59:37.490645 4624 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b13e83c_72f7_4925_abc6_1e284917cb66.slice/crio-05b3e6508c0924aba805d2f7bfaf52f175765f7d55a16955a05fddb6ce5077e4 WatchSource:0}: Error finding container 05b3e6508c0924aba805d2f7bfaf52f175765f7d55a16955a05fddb6ce5077e4: Status 404 returned error can't find the container with id 05b3e6508c0924aba805d2f7bfaf52f175765f7d55a16955a05fddb6ce5077e4 Feb 28 03:59:38 crc kubenswrapper[4624]: I0228 03:59:38.102147 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" path="/var/lib/kubelet/pods/31f2aae0-acdc-4c37-bb36-e7685b9dfe42/volumes" Feb 28 03:59:38 crc kubenswrapper[4624]: I0228 03:59:38.343865 4624 generic.go:334] "Generic (PLEG): container finished" podID="4b13e83c-72f7-4925-abc6-1e284917cb66" containerID="4b910ad8085071f1e9460084d8921ba38eb28fbc959e3b283d515b9acfa51f4f" exitCode=0 Feb 28 03:59:38 crc kubenswrapper[4624]: I0228 03:59:38.343953 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" event={"ID":"4b13e83c-72f7-4925-abc6-1e284917cb66","Type":"ContainerDied","Data":"4b910ad8085071f1e9460084d8921ba38eb28fbc959e3b283d515b9acfa51f4f"} Feb 28 03:59:38 crc kubenswrapper[4624]: I0228 03:59:38.344455 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" event={"ID":"4b13e83c-72f7-4925-abc6-1e284917cb66","Type":"ContainerStarted","Data":"05b3e6508c0924aba805d2f7bfaf52f175765f7d55a16955a05fddb6ce5077e4"} Feb 28 03:59:39 crc kubenswrapper[4624]: I0228 03:59:39.368319 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" event={"ID":"4b13e83c-72f7-4925-abc6-1e284917cb66","Type":"ContainerStarted","Data":"c90049219585e4ebb98ebd8f8571835cf934ffc565ef5223cef3c4f4ec136620"} Feb 28 03:59:39 crc kubenswrapper[4624]: I0228 03:59:39.368753 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:39 crc kubenswrapper[4624]: I0228 03:59:39.405594 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" podStartSLOduration=3.405562775 podStartE2EDuration="3.405562775s" podCreationTimestamp="2026-02-28 03:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 03:59:39.399599944 +0000 UTC m=+1434.063639263" watchObservedRunningTime="2026-02-28 03:59:39.405562775 +0000 UTC m=+1434.069602094" Feb 28 03:59:46 crc kubenswrapper[4624]: I0228 03:59:46.946466 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56f7ccd8f7-zth9s" Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.048507 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-v5n48"] Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.049075 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerName="dnsmasq-dns" containerID="cri-o://0ba3923f541dc3227c165ac4f097f35fb2e8257457998c1486a0ab4e524e0ad2" gracePeriod=10 Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.435890 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jwjv" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" probeResult="failure" output=< Feb 28 03:59:47 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 03:59:47 crc kubenswrapper[4624]: > Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.541494 4624 generic.go:334] "Generic (PLEG): container finished" podID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerID="0ba3923f541dc3227c165ac4f097f35fb2e8257457998c1486a0ab4e524e0ad2" exitCode=0 Feb 28 
03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.541902 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" event={"ID":"98165b6f-5ef8-4e36-9edc-58f677b6110a","Type":"ContainerDied","Data":"0ba3923f541dc3227c165ac4f097f35fb2e8257457998c1486a0ab4e524e0ad2"} Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.843941 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.979769 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-config\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.979812 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-openstack-edpm-ipam\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.980042 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfrqk\" (UniqueName: \"kubernetes.io/projected/98165b6f-5ef8-4e36-9edc-58f677b6110a-kube-api-access-tfrqk\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.980073 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-svc\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.980116 4624 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-swift-storage-0\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.980146 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-sb\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.980193 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-nb\") pod \"98165b6f-5ef8-4e36-9edc-58f677b6110a\" (UID: \"98165b6f-5ef8-4e36-9edc-58f677b6110a\") " Feb 28 03:59:47 crc kubenswrapper[4624]: I0228 03:59:47.999423 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98165b6f-5ef8-4e36-9edc-58f677b6110a-kube-api-access-tfrqk" (OuterVolumeSpecName: "kube-api-access-tfrqk") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "kube-api-access-tfrqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.058055 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.074394 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.077670 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.083359 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfrqk\" (UniqueName: \"kubernetes.io/projected/98165b6f-5ef8-4e36-9edc-58f677b6110a-kube-api-access-tfrqk\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.083385 4624 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.083397 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.083409 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-openstack-edpm-ipam\") on node \"crc\" DevicePath 
\"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.098608 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.102956 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-config" (OuterVolumeSpecName: "config") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.105889 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98165b6f-5ef8-4e36-9edc-58f677b6110a" (UID: "98165b6f-5ef8-4e36-9edc-58f677b6110a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.189230 4624 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.189276 4624 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.189291 4624 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98165b6f-5ef8-4e36-9edc-58f677b6110a-config\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.559581 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" event={"ID":"98165b6f-5ef8-4e36-9edc-58f677b6110a","Type":"ContainerDied","Data":"87c1f16b8b53f3c93bd0cc14859e4608e2abb4718a0ad3b185cf9bcb9ef49bc3"} Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.560204 4624 scope.go:117] "RemoveContainer" containerID="0ba3923f541dc3227c165ac4f097f35fb2e8257457998c1486a0ab4e524e0ad2" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.559733 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-v5n48" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.587839 4624 scope.go:117] "RemoveContainer" containerID="72b0cdd5b69faa7a03ef70200b7daea4cef59b1962de5f0f97ae5648230bca30" Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.595923 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-v5n48"] Feb 28 03:59:48 crc kubenswrapper[4624]: I0228 03:59:48.609978 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-v5n48"] Feb 28 03:59:50 crc kubenswrapper[4624]: I0228 03:59:50.122807 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" path="/var/lib/kubelet/pods/98165b6f-5ef8-4e36-9edc-58f677b6110a/volumes" Feb 28 03:59:56 crc kubenswrapper[4624]: I0228 03:59:56.452449 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:56 crc kubenswrapper[4624]: I0228 03:59:56.540533 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:56 crc kubenswrapper[4624]: I0228 03:59:56.727797 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jwjv"] Feb 28 03:59:57 crc kubenswrapper[4624]: I0228 03:59:57.651648 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8jwjv" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" containerID="cri-o://6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca" gracePeriod=2 Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.112306 4624 scope.go:117] "RemoveContainer" containerID="2d9743f89f0f5c87640465c946080642bd0c5cd805b04608a130e5648e4da90d" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.154745 4624 
scope.go:117] "RemoveContainer" containerID="8360966c4c8aad01127f1c7d51a9c5d9a660785d6abffb67a53ac7c0c40ac626" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.266888 4624 scope.go:117] "RemoveContainer" containerID="9c968899766310fc36273a2661308455478e9e58375c07fb3ca1cf3e5e1068f8" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.310074 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.329069 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r72p7\" (UniqueName: \"kubernetes.io/projected/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-kube-api-access-r72p7\") pod \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.329280 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-utilities\") pod \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.329370 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-catalog-content\") pod \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\" (UID: \"4d52f3b5-128c-4fd5-bb50-500ca62ff71d\") " Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.333494 4624 scope.go:117] "RemoveContainer" containerID="64445ae7b528324fab9b7bff71b59add9aa1376f210ed59a5d97a77123c50746" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.334556 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-utilities" (OuterVolumeSpecName: "utilities") pod 
"4d52f3b5-128c-4fd5-bb50-500ca62ff71d" (UID: "4d52f3b5-128c-4fd5-bb50-500ca62ff71d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.342486 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-kube-api-access-r72p7" (OuterVolumeSpecName: "kube-api-access-r72p7") pod "4d52f3b5-128c-4fd5-bb50-500ca62ff71d" (UID: "4d52f3b5-128c-4fd5-bb50-500ca62ff71d"). InnerVolumeSpecName "kube-api-access-r72p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.396056 4624 scope.go:117] "RemoveContainer" containerID="772af69066771ce1f1d37e96334fecb39c7f80fd096300cb0b8cceef5c5486b2" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.431056 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r72p7\" (UniqueName: \"kubernetes.io/projected/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-kube-api-access-r72p7\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.431328 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.460223 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d52f3b5-128c-4fd5-bb50-500ca62ff71d" (UID: "4d52f3b5-128c-4fd5-bb50-500ca62ff71d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.533070 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d52f3b5-128c-4fd5-bb50-500ca62ff71d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.669171 4624 generic.go:334] "Generic (PLEG): container finished" podID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerID="6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca" exitCode=0 Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.669240 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerDied","Data":"6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca"} Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.669254 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwjv" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.669284 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwjv" event={"ID":"4d52f3b5-128c-4fd5-bb50-500ca62ff71d","Type":"ContainerDied","Data":"57399d5751e915f59b9babfbad13f8708f1b22bcd642c30fad896561f8127a6a"} Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.669315 4624 scope.go:117] "RemoveContainer" containerID="6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.700146 4624 scope.go:117] "RemoveContainer" containerID="03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.778951 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jwjv"] Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.793606 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8jwjv"] Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.803591 4624 scope.go:117] "RemoveContainer" containerID="5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.842538 4624 scope.go:117] "RemoveContainer" containerID="6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca" Feb 28 03:59:58 crc kubenswrapper[4624]: E0228 03:59:58.843435 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca\": container with ID starting with 6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca not found: ID does not exist" containerID="6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.843472 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca"} err="failed to get container status \"6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca\": rpc error: code = NotFound desc = could not find container \"6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca\": container with ID starting with 6493ecde9fd3580258d098fa0a83495dce983200c05f800cb43d9cbcfba518ca not found: ID does not exist" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.843512 4624 scope.go:117] "RemoveContainer" containerID="03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1" Feb 28 03:59:58 crc kubenswrapper[4624]: E0228 03:59:58.843790 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1\": container with ID starting with 03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1 not found: ID does not exist" containerID="03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.843831 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1"} err="failed to get container status \"03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1\": rpc error: code = NotFound desc = could not find container \"03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1\": container with ID starting with 03beadab728bec823ca5fd108c35c2181b3a73448a60bb1ce60fcee2cc72d0a1 not found: ID does not exist" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.843843 4624 scope.go:117] "RemoveContainer" containerID="5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4" Feb 28 03:59:58 crc kubenswrapper[4624]: E0228 
03:59:58.844213 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4\": container with ID starting with 5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4 not found: ID does not exist" containerID="5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4" Feb 28 03:59:58 crc kubenswrapper[4624]: I0228 03:59:58.844234 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4"} err="failed to get container status \"5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4\": rpc error: code = NotFound desc = could not find container \"5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4\": container with ID starting with 5010f657b8c085b07bc015bc38dba1c21e4ea41a267325c739648ffa5d324ce4 not found: ID does not exist" Feb 28 03:59:59 crc kubenswrapper[4624]: I0228 03:59:59.682543 4624 generic.go:334] "Generic (PLEG): container finished" podID="00b8a2b6-f64c-452c-ac93-00422b339f64" containerID="7217a1ad389fb9161753fc86363cfb85051aa1873cb11461731b1d7aa784b0a1" exitCode=0 Feb 28 03:59:59 crc kubenswrapper[4624]: I0228 03:59:59.682620 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"00b8a2b6-f64c-452c-ac93-00422b339f64","Type":"ContainerDied","Data":"7217a1ad389fb9161753fc86363cfb85051aa1873cb11461731b1d7aa784b0a1"} Feb 28 03:59:59 crc kubenswrapper[4624]: I0228 03:59:59.685029 4624 generic.go:334] "Generic (PLEG): container finished" podID="03d202d9-cd01-4f0c-b7dc-9e89a7676c65" containerID="63138705dff26ac39396eef47e5efa2e26f87ae1ebb98c10e83924e964a7db2b" exitCode=0 Feb 28 03:59:59 crc kubenswrapper[4624]: I0228 03:59:59.685130 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"03d202d9-cd01-4f0c-b7dc-9e89a7676c65","Type":"ContainerDied","Data":"63138705dff26ac39396eef47e5efa2e26f87ae1ebb98c10e83924e964a7db2b"} Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.115959 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" path="/var/lib/kubelet/pods/4d52f3b5-128c-4fd5-bb50-500ca62ff71d/volumes" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.181568 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx"] Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182144 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182168 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182191 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerName="dnsmasq-dns" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182200 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerName="dnsmasq-dns" Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182219 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerName="dnsmasq-dns" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182226 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerName="dnsmasq-dns" Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182253 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="extract-utilities" Feb 28 04:00:00 crc 
kubenswrapper[4624]: I0228 04:00:00.182260 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="extract-utilities" Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182268 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerName="init" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182273 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerName="init" Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182284 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerName="init" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182291 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerName="init" Feb 28 04:00:00 crc kubenswrapper[4624]: E0228 04:00:00.182299 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="extract-content" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182305 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="extract-content" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182488 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d52f3b5-128c-4fd5-bb50-500ca62ff71d" containerName="registry-server" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182512 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f2aae0-acdc-4c37-bb36-e7685b9dfe42" containerName="dnsmasq-dns" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.182523 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="98165b6f-5ef8-4e36-9edc-58f677b6110a" containerName="dnsmasq-dns" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.183341 4624 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.187514 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.187615 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.201586 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537520-zqh2r"] Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.203211 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.206421 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.206676 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.206839 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.224416 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx"] Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.232569 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537520-zqh2r"] Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.281620 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kbc5\" 
(UniqueName: \"kubernetes.io/projected/2febede5-9921-477c-94f4-496013e7274a-kube-api-access-2kbc5\") pod \"auto-csr-approver-29537520-zqh2r\" (UID: \"2febede5-9921-477c-94f4-496013e7274a\") " pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.281701 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-config-volume\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.281757 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgwb\" (UniqueName: \"kubernetes.io/projected/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-kube-api-access-vjgwb\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.281903 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-secret-volume\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.384574 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-secret-volume\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 
28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.384648 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kbc5\" (UniqueName: \"kubernetes.io/projected/2febede5-9921-477c-94f4-496013e7274a-kube-api-access-2kbc5\") pod \"auto-csr-approver-29537520-zqh2r\" (UID: \"2febede5-9921-477c-94f4-496013e7274a\") " pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.384679 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-config-volume\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.384721 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgwb\" (UniqueName: \"kubernetes.io/projected/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-kube-api-access-vjgwb\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.386053 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-config-volume\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.391785 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-secret-volume\") pod \"collect-profiles-29537520-6mkpx\" (UID: 
\"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.404905 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kbc5\" (UniqueName: \"kubernetes.io/projected/2febede5-9921-477c-94f4-496013e7274a-kube-api-access-2kbc5\") pod \"auto-csr-approver-29537520-zqh2r\" (UID: \"2febede5-9921-477c-94f4-496013e7274a\") " pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.416104 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgwb\" (UniqueName: \"kubernetes.io/projected/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-kube-api-access-vjgwb\") pod \"collect-profiles-29537520-6mkpx\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.545863 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.563731 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.738187 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"00b8a2b6-f64c-452c-ac93-00422b339f64","Type":"ContainerStarted","Data":"98ad2e0494942f90302fe60ca7059c1127dabc1b359acb495b9b03744da47317"} Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.740239 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.750696 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03d202d9-cd01-4f0c-b7dc-9e89a7676c65","Type":"ContainerStarted","Data":"d81d4bf5e3afe247bd5b3fa02df8c9f1ae127c78cfeb46f50b59e252a4028760"} Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.751060 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.792766 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.792640788 podStartE2EDuration="37.792640788s" podCreationTimestamp="2026-02-28 03:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:00:00.77126936 +0000 UTC m=+1455.435308669" watchObservedRunningTime="2026-02-28 04:00:00.792640788 +0000 UTC m=+1455.456680107" Feb 28 04:00:00 crc kubenswrapper[4624]: I0228 04:00:00.859514 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.859487937 podStartE2EDuration="36.859487937s" podCreationTimestamp="2026-02-28 03:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 04:00:00.827772289 +0000 UTC m=+1455.491811598" watchObservedRunningTime="2026-02-28 04:00:00.859487937 +0000 UTC m=+1455.523527256" Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.182785 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx"] Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.290293 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537520-zqh2r"] Feb 28 04:00:01 crc kubenswrapper[4624]: W0228 04:00:01.314932 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2febede5_9921_477c_94f4_496013e7274a.slice/crio-b639e08afc0ef231cb4110d98765e290f083d64910a488ab16b1feab1c1efcf1 WatchSource:0}: Error finding container b639e08afc0ef231cb4110d98765e290f083d64910a488ab16b1feab1c1efcf1: Status 404 returned error can't find the container with id b639e08afc0ef231cb4110d98765e290f083d64910a488ab16b1feab1c1efcf1 Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.318988 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.764338 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" event={"ID":"1d0bb789-cb32-47a1-9a3d-38658ad2cb80","Type":"ContainerStarted","Data":"ad25b920c8bfb1525d2215bfa2afb7e41b7cf89958ea8514fa13403aa82364dc"} Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.764831 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" event={"ID":"1d0bb789-cb32-47a1-9a3d-38658ad2cb80","Type":"ContainerStarted","Data":"c415cacd454e6b414cf310a632c9336b2a3d36d210a051b673b4bb736c0b7ada"} Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.767705 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" event={"ID":"2febede5-9921-477c-94f4-496013e7274a","Type":"ContainerStarted","Data":"b639e08afc0ef231cb4110d98765e290f083d64910a488ab16b1feab1c1efcf1"} Feb 28 04:00:01 crc kubenswrapper[4624]: I0228 04:00:01.795415 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" podStartSLOduration=1.795393373 podStartE2EDuration="1.795393373s" podCreationTimestamp="2026-02-28 04:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:00:01.784750805 +0000 UTC m=+1456.448790124" watchObservedRunningTime="2026-02-28 04:00:01.795393373 +0000 UTC m=+1456.459432692" Feb 28 04:00:02 crc kubenswrapper[4624]: I0228 04:00:02.781066 4624 generic.go:334] "Generic (PLEG): container finished" podID="1d0bb789-cb32-47a1-9a3d-38658ad2cb80" containerID="ad25b920c8bfb1525d2215bfa2afb7e41b7cf89958ea8514fa13403aa82364dc" exitCode=0 Feb 28 04:00:02 crc kubenswrapper[4624]: I0228 04:00:02.781149 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" event={"ID":"1d0bb789-cb32-47a1-9a3d-38658ad2cb80","Type":"ContainerDied","Data":"ad25b920c8bfb1525d2215bfa2afb7e41b7cf89958ea8514fa13403aa82364dc"} Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.241750 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.289176 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-secret-volume\") pod \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.289407 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgwb\" (UniqueName: \"kubernetes.io/projected/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-kube-api-access-vjgwb\") pod \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.289522 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-config-volume\") pod \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\" (UID: \"1d0bb789-cb32-47a1-9a3d-38658ad2cb80\") " Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.290327 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d0bb789-cb32-47a1-9a3d-38658ad2cb80" (UID: "1d0bb789-cb32-47a1-9a3d-38658ad2cb80"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.304198 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-kube-api-access-vjgwb" (OuterVolumeSpecName: "kube-api-access-vjgwb") pod "1d0bb789-cb32-47a1-9a3d-38658ad2cb80" (UID: "1d0bb789-cb32-47a1-9a3d-38658ad2cb80"). 
InnerVolumeSpecName "kube-api-access-vjgwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.307215 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d0bb789-cb32-47a1-9a3d-38658ad2cb80" (UID: "1d0bb789-cb32-47a1-9a3d-38658ad2cb80"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.392118 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.392156 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.392167 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgwb\" (UniqueName: \"kubernetes.io/projected/1d0bb789-cb32-47a1-9a3d-38658ad2cb80-kube-api-access-vjgwb\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.805026 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" event={"ID":"1d0bb789-cb32-47a1-9a3d-38658ad2cb80","Type":"ContainerDied","Data":"c415cacd454e6b414cf310a632c9336b2a3d36d210a051b673b4bb736c0b7ada"} Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.805075 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c415cacd454e6b414cf310a632c9336b2a3d36d210a051b673b4bb736c0b7ada" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.805182 4624 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.971235 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p"] Feb 28 04:00:04 crc kubenswrapper[4624]: E0228 04:00:04.972113 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0bb789-cb32-47a1-9a3d-38658ad2cb80" containerName="collect-profiles" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.972230 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0bb789-cb32-47a1-9a3d-38658ad2cb80" containerName="collect-profiles" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.972537 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0bb789-cb32-47a1-9a3d-38658ad2cb80" containerName="collect-profiles" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.973639 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.979422 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.980376 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.992308 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:00:04 crc kubenswrapper[4624]: I0228 04:00:04.992928 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.003924 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.004263 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.004368 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.004557 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj5b6\" (UniqueName: \"kubernetes.io/projected/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-kube-api-access-jj5b6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.017045 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p"] Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.106592 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj5b6\" (UniqueName: \"kubernetes.io/projected/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-kube-api-access-jj5b6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.106681 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.106758 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.106803 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.115469 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.115898 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.127292 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.142067 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj5b6\" (UniqueName: \"kubernetes.io/projected/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-kube-api-access-jj5b6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:05 crc kubenswrapper[4624]: I0228 04:00:05.324825 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:06 crc kubenswrapper[4624]: I0228 04:00:06.131624 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p"] Feb 28 04:00:06 crc kubenswrapper[4624]: I0228 04:00:06.866734 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" event={"ID":"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51","Type":"ContainerStarted","Data":"fc64ebbbde99b1618e716dda694fe9abc8822a5971d24e8da045caba19fc6fd2"} Feb 28 04:00:12 crc kubenswrapper[4624]: I0228 04:00:12.991802 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ztvjp"] Feb 28 04:00:12 crc kubenswrapper[4624]: I0228 04:00:12.995020 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.001554 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztvjp"] Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.063721 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxksb\" (UniqueName: \"kubernetes.io/projected/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-kube-api-access-zxksb\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.063809 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-catalog-content\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.063835 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-utilities\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.166430 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxksb\" (UniqueName: \"kubernetes.io/projected/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-kube-api-access-zxksb\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.166516 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-catalog-content\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.166550 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-utilities\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.167586 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-catalog-content\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.167614 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-utilities\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.202816 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxksb\" (UniqueName: \"kubernetes.io/projected/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-kube-api-access-zxksb\") pod \"certified-operators-ztvjp\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:13 crc kubenswrapper[4624]: I0228 04:00:13.326596 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:14 crc kubenswrapper[4624]: I0228 04:00:14.008506 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:00:14 crc kubenswrapper[4624]: I0228 04:00:14.770376 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 04:00:18 crc kubenswrapper[4624]: I0228 04:00:18.357961 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztvjp"] Feb 28 04:00:19 crc kubenswrapper[4624]: I0228 04:00:19.023039 4624 generic.go:334] "Generic (PLEG): container finished" podID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerID="7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164" exitCode=0 Feb 28 04:00:19 crc kubenswrapper[4624]: I0228 04:00:19.023315 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerDied","Data":"7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164"} Feb 28 04:00:19 crc kubenswrapper[4624]: I0228 04:00:19.023363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerStarted","Data":"65749f1d33de5822ca4c7a30262dfd09805bce29e66f5275bd066430c5641544"} Feb 28 04:00:19 crc kubenswrapper[4624]: I0228 04:00:19.030408 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" event={"ID":"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51","Type":"ContainerStarted","Data":"a0a7e12f9082a885867e03535e606e7b272c44958dcdd3f492a23703bbfc04ce"} Feb 28 04:00:19 crc kubenswrapper[4624]: I0228 04:00:19.078077 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" podStartSLOduration=3.368779897 podStartE2EDuration="15.078051979s" podCreationTimestamp="2026-02-28 04:00:04 +0000 UTC" firstStartedPulling="2026-02-28 04:00:06.141296892 +0000 UTC m=+1460.805336201" lastFinishedPulling="2026-02-28 04:00:17.850568954 +0000 UTC m=+1472.514608283" observedRunningTime="2026-02-28 04:00:19.069661012 +0000 UTC m=+1473.733700311" watchObservedRunningTime="2026-02-28 04:00:19.078051979 +0000 UTC m=+1473.742091298" Feb 28 04:00:21 crc kubenswrapper[4624]: I0228 04:00:21.080891 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerStarted","Data":"2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455"} Feb 28 04:00:23 crc kubenswrapper[4624]: I0228 04:00:23.113880 4624 generic.go:334] "Generic (PLEG): container finished" podID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerID="2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455" exitCode=0 Feb 28 04:00:23 crc kubenswrapper[4624]: I0228 04:00:23.114042 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerDied","Data":"2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455"} Feb 28 04:00:24 crc kubenswrapper[4624]: I0228 04:00:24.130217 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" event={"ID":"2febede5-9921-477c-94f4-496013e7274a","Type":"ContainerStarted","Data":"0d183760a036fd1efb99688b4e26f3e5c0b51648fd34421c97f9c910e5084f94"} Feb 28 04:00:24 crc kubenswrapper[4624]: I0228 04:00:24.133635 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" 
event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerStarted","Data":"c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241"} Feb 28 04:00:24 crc kubenswrapper[4624]: I0228 04:00:24.185917 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" podStartSLOduration=2.06767499 podStartE2EDuration="24.185895669s" podCreationTimestamp="2026-02-28 04:00:00 +0000 UTC" firstStartedPulling="2026-02-28 04:00:01.31873018 +0000 UTC m=+1455.982769489" lastFinishedPulling="2026-02-28 04:00:23.436950859 +0000 UTC m=+1478.100990168" observedRunningTime="2026-02-28 04:00:24.154021556 +0000 UTC m=+1478.818060875" watchObservedRunningTime="2026-02-28 04:00:24.185895669 +0000 UTC m=+1478.849934978" Feb 28 04:00:24 crc kubenswrapper[4624]: I0228 04:00:24.190969 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ztvjp" podStartSLOduration=7.682833528 podStartE2EDuration="12.190955885s" podCreationTimestamp="2026-02-28 04:00:12 +0000 UTC" firstStartedPulling="2026-02-28 04:00:19.027134121 +0000 UTC m=+1473.691173430" lastFinishedPulling="2026-02-28 04:00:23.535256478 +0000 UTC m=+1478.199295787" observedRunningTime="2026-02-28 04:00:24.181436727 +0000 UTC m=+1478.845476036" watchObservedRunningTime="2026-02-28 04:00:24.190955885 +0000 UTC m=+1478.854995194" Feb 28 04:00:26 crc kubenswrapper[4624]: I0228 04:00:26.161570 4624 generic.go:334] "Generic (PLEG): container finished" podID="2febede5-9921-477c-94f4-496013e7274a" containerID="0d183760a036fd1efb99688b4e26f3e5c0b51648fd34421c97f9c910e5084f94" exitCode=0 Feb 28 04:00:26 crc kubenswrapper[4624]: I0228 04:00:26.161771 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" 
event={"ID":"2febede5-9921-477c-94f4-496013e7274a","Type":"ContainerDied","Data":"0d183760a036fd1efb99688b4e26f3e5c0b51648fd34421c97f9c910e5084f94"} Feb 28 04:00:27 crc kubenswrapper[4624]: I0228 04:00:27.604565 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:27 crc kubenswrapper[4624]: I0228 04:00:27.722605 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kbc5\" (UniqueName: \"kubernetes.io/projected/2febede5-9921-477c-94f4-496013e7274a-kube-api-access-2kbc5\") pod \"2febede5-9921-477c-94f4-496013e7274a\" (UID: \"2febede5-9921-477c-94f4-496013e7274a\") " Feb 28 04:00:27 crc kubenswrapper[4624]: I0228 04:00:27.730899 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2febede5-9921-477c-94f4-496013e7274a-kube-api-access-2kbc5" (OuterVolumeSpecName: "kube-api-access-2kbc5") pod "2febede5-9921-477c-94f4-496013e7274a" (UID: "2febede5-9921-477c-94f4-496013e7274a"). InnerVolumeSpecName "kube-api-access-2kbc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:00:27 crc kubenswrapper[4624]: I0228 04:00:27.826054 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kbc5\" (UniqueName: \"kubernetes.io/projected/2febede5-9921-477c-94f4-496013e7274a-kube-api-access-2kbc5\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:28 crc kubenswrapper[4624]: I0228 04:00:28.187455 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" event={"ID":"2febede5-9921-477c-94f4-496013e7274a","Type":"ContainerDied","Data":"b639e08afc0ef231cb4110d98765e290f083d64910a488ab16b1feab1c1efcf1"} Feb 28 04:00:28 crc kubenswrapper[4624]: I0228 04:00:28.187767 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b639e08afc0ef231cb4110d98765e290f083d64910a488ab16b1feab1c1efcf1" Feb 28 04:00:28 crc kubenswrapper[4624]: I0228 04:00:28.187556 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537520-zqh2r" Feb 28 04:00:28 crc kubenswrapper[4624]: I0228 04:00:28.258142 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537514-44w2r"] Feb 28 04:00:28 crc kubenswrapper[4624]: I0228 04:00:28.268821 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537514-44w2r"] Feb 28 04:00:30 crc kubenswrapper[4624]: I0228 04:00:30.099975 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e" path="/var/lib/kubelet/pods/2d2cb73e-4c36-4a5b-94ee-3d8654d16c0e/volumes" Feb 28 04:00:30 crc kubenswrapper[4624]: I0228 04:00:30.225073 4624 generic.go:334] "Generic (PLEG): container finished" podID="435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" containerID="a0a7e12f9082a885867e03535e606e7b272c44958dcdd3f492a23703bbfc04ce" exitCode=0 Feb 28 04:00:30 crc kubenswrapper[4624]: I0228 04:00:30.225516 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" event={"ID":"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51","Type":"ContainerDied","Data":"a0a7e12f9082a885867e03535e606e7b272c44958dcdd3f492a23703bbfc04ce"} Feb 28 04:00:31 crc kubenswrapper[4624]: I0228 04:00:31.905751 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.066805 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-ssh-key-openstack-edpm-ipam\") pod \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.067499 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj5b6\" (UniqueName: \"kubernetes.io/projected/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-kube-api-access-jj5b6\") pod \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.067760 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-inventory\") pod \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.067903 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-repo-setup-combined-ca-bundle\") pod \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\" (UID: \"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51\") " Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 
04:00:32.104691 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" (UID: "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.106423 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-kube-api-access-jj5b6" (OuterVolumeSpecName: "kube-api-access-jj5b6") pod "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" (UID: "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51"). InnerVolumeSpecName "kube-api-access-jj5b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.111963 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-inventory" (OuterVolumeSpecName: "inventory") pod "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" (UID: "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.135028 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" (UID: "435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.170578 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.170620 4624 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.170630 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.170641 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj5b6\" (UniqueName: \"kubernetes.io/projected/435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51-kube-api-access-jj5b6\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.255003 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" event={"ID":"435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51","Type":"ContainerDied","Data":"fc64ebbbde99b1618e716dda694fe9abc8822a5971d24e8da045caba19fc6fd2"} Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.255438 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc64ebbbde99b1618e716dda694fe9abc8822a5971d24e8da045caba19fc6fd2" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.255165 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.352473 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg"] Feb 28 04:00:32 crc kubenswrapper[4624]: E0228 04:00:32.352991 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2febede5-9921-477c-94f4-496013e7274a" containerName="oc" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.353013 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2febede5-9921-477c-94f4-496013e7274a" containerName="oc" Feb 28 04:00:32 crc kubenswrapper[4624]: E0228 04:00:32.353046 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.353057 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.353323 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.353358 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2febede5-9921-477c-94f4-496013e7274a" containerName="oc" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.354172 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.356614 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.356685 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.357379 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.357882 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.372242 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg"] Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.477554 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.477986 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fx6t\" (UniqueName: \"kubernetes.io/projected/985adc96-94ed-4823-a477-f222def355a1-kube-api-access-7fx6t\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.478144 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.580010 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.580068 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fx6t\" (UniqueName: \"kubernetes.io/projected/985adc96-94ed-4823-a477-f222def355a1-kube-api-access-7fx6t\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.580130 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.588029 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.595838 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.605385 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fx6t\" (UniqueName: \"kubernetes.io/projected/985adc96-94ed-4823-a477-f222def355a1-kube-api-access-7fx6t\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tb8rg\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:32 crc kubenswrapper[4624]: I0228 04:00:32.671513 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:33 crc kubenswrapper[4624]: I0228 04:00:33.293612 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg"] Feb 28 04:00:33 crc kubenswrapper[4624]: W0228 04:00:33.293719 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod985adc96_94ed_4823_a477_f222def355a1.slice/crio-375f786fb3e6ca8fbce34529ee1c1dc8a2c3f132be5a4d2b65ce968326196501 WatchSource:0}: Error finding container 375f786fb3e6ca8fbce34529ee1c1dc8a2c3f132be5a4d2b65ce968326196501: Status 404 returned error can't find the container with id 375f786fb3e6ca8fbce34529ee1c1dc8a2c3f132be5a4d2b65ce968326196501 Feb 28 04:00:33 crc kubenswrapper[4624]: I0228 04:00:33.328292 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:33 crc kubenswrapper[4624]: I0228 04:00:33.328365 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:34 crc kubenswrapper[4624]: I0228 04:00:34.280809 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" event={"ID":"985adc96-94ed-4823-a477-f222def355a1","Type":"ContainerStarted","Data":"8bf26a059416b985b1e19a5b93a67989c5cd832891cff1f08d7006125a20760c"} Feb 28 04:00:34 crc kubenswrapper[4624]: I0228 04:00:34.281326 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" event={"ID":"985adc96-94ed-4823-a477-f222def355a1","Type":"ContainerStarted","Data":"375f786fb3e6ca8fbce34529ee1c1dc8a2c3f132be5a4d2b65ce968326196501"} Feb 28 04:00:34 crc kubenswrapper[4624]: I0228 04:00:34.309786 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" podStartSLOduration=1.829805042 podStartE2EDuration="2.309756105s" podCreationTimestamp="2026-02-28 04:00:32 +0000 UTC" firstStartedPulling="2026-02-28 04:00:33.297478802 +0000 UTC m=+1487.961518111" lastFinishedPulling="2026-02-28 04:00:33.777429855 +0000 UTC m=+1488.441469174" observedRunningTime="2026-02-28 04:00:34.303382473 +0000 UTC m=+1488.967421822" watchObservedRunningTime="2026-02-28 04:00:34.309756105 +0000 UTC m=+1488.973795414" Feb 28 04:00:34 crc kubenswrapper[4624]: I0228 04:00:34.374497 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ztvjp" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="registry-server" probeResult="failure" output=< Feb 28 04:00:34 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:00:34 crc kubenswrapper[4624]: > Feb 28 04:00:37 crc kubenswrapper[4624]: I0228 04:00:37.338953 4624 generic.go:334] "Generic (PLEG): container finished" podID="985adc96-94ed-4823-a477-f222def355a1" containerID="8bf26a059416b985b1e19a5b93a67989c5cd832891cff1f08d7006125a20760c" exitCode=0 Feb 28 04:00:37 crc kubenswrapper[4624]: I0228 04:00:37.339068 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" event={"ID":"985adc96-94ed-4823-a477-f222def355a1","Type":"ContainerDied","Data":"8bf26a059416b985b1e19a5b93a67989c5cd832891cff1f08d7006125a20760c"} Feb 28 04:00:38 crc kubenswrapper[4624]: I0228 04:00:38.882073 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.048862 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-inventory\") pod \"985adc96-94ed-4823-a477-f222def355a1\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.049033 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fx6t\" (UniqueName: \"kubernetes.io/projected/985adc96-94ed-4823-a477-f222def355a1-kube-api-access-7fx6t\") pod \"985adc96-94ed-4823-a477-f222def355a1\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.049300 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-ssh-key-openstack-edpm-ipam\") pod \"985adc96-94ed-4823-a477-f222def355a1\" (UID: \"985adc96-94ed-4823-a477-f222def355a1\") " Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.057358 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985adc96-94ed-4823-a477-f222def355a1-kube-api-access-7fx6t" (OuterVolumeSpecName: "kube-api-access-7fx6t") pod "985adc96-94ed-4823-a477-f222def355a1" (UID: "985adc96-94ed-4823-a477-f222def355a1"). InnerVolumeSpecName "kube-api-access-7fx6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.086155 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "985adc96-94ed-4823-a477-f222def355a1" (UID: "985adc96-94ed-4823-a477-f222def355a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.089858 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-inventory" (OuterVolumeSpecName: "inventory") pod "985adc96-94ed-4823-a477-f222def355a1" (UID: "985adc96-94ed-4823-a477-f222def355a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.152639 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fx6t\" (UniqueName: \"kubernetes.io/projected/985adc96-94ed-4823-a477-f222def355a1-kube-api-access-7fx6t\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.152697 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.152723 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/985adc96-94ed-4823-a477-f222def355a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.367517 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" 
event={"ID":"985adc96-94ed-4823-a477-f222def355a1","Type":"ContainerDied","Data":"375f786fb3e6ca8fbce34529ee1c1dc8a2c3f132be5a4d2b65ce968326196501"} Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.367905 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="375f786fb3e6ca8fbce34529ee1c1dc8a2c3f132be5a4d2b65ce968326196501" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.368226 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tb8rg" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.530527 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86"] Feb 28 04:00:39 crc kubenswrapper[4624]: E0228 04:00:39.531221 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985adc96-94ed-4823-a477-f222def355a1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.531243 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="985adc96-94ed-4823-a477-f222def355a1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.531558 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="985adc96-94ed-4823-a477-f222def355a1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.532492 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.535644 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.535862 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.536032 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.536180 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.543232 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86"] Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.674371 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pd4l\" (UniqueName: \"kubernetes.io/projected/b81c936c-7c68-4155-bee6-b4fab7bc44e8-kube-api-access-5pd4l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.674479 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 
04:00:39.674550 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.674593 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.778151 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.778687 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.778787 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pd4l\" (UniqueName: 
\"kubernetes.io/projected/b81c936c-7c68-4155-bee6-b4fab7bc44e8-kube-api-access-5pd4l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.778926 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.782068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.782953 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.789782 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.799803 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pd4l\" (UniqueName: \"kubernetes.io/projected/b81c936c-7c68-4155-bee6-b4fab7bc44e8-kube-api-access-5pd4l\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:39 crc kubenswrapper[4624]: I0228 04:00:39.848446 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:00:40 crc kubenswrapper[4624]: I0228 04:00:40.443874 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86"] Feb 28 04:00:41 crc kubenswrapper[4624]: I0228 04:00:41.392972 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" event={"ID":"b81c936c-7c68-4155-bee6-b4fab7bc44e8","Type":"ContainerStarted","Data":"e16797efaf9d197270507b37e25e2bd3098fedc76bb5af81c23c6c9954b83d03"} Feb 28 04:00:41 crc kubenswrapper[4624]: I0228 04:00:41.393451 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" event={"ID":"b81c936c-7c68-4155-bee6-b4fab7bc44e8","Type":"ContainerStarted","Data":"ce885cd39097c3ace556b3b7862b501e75e0fca19e117dd1e48ad7a0ce7cc6ee"} Feb 28 04:00:43 crc kubenswrapper[4624]: I0228 04:00:43.384280 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:43 crc kubenswrapper[4624]: I0228 04:00:43.430044 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" 
podStartSLOduration=3.760529193 podStartE2EDuration="4.430013873s" podCreationTimestamp="2026-02-28 04:00:39 +0000 UTC" firstStartedPulling="2026-02-28 04:00:40.454179095 +0000 UTC m=+1495.118218404" lastFinishedPulling="2026-02-28 04:00:41.123663775 +0000 UTC m=+1495.787703084" observedRunningTime="2026-02-28 04:00:41.412992761 +0000 UTC m=+1496.077032070" watchObservedRunningTime="2026-02-28 04:00:43.430013873 +0000 UTC m=+1498.094053222" Feb 28 04:00:43 crc kubenswrapper[4624]: I0228 04:00:43.455354 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:44 crc kubenswrapper[4624]: I0228 04:00:44.181320 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztvjp"] Feb 28 04:00:44 crc kubenswrapper[4624]: I0228 04:00:44.435698 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ztvjp" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="registry-server" containerID="cri-o://c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241" gracePeriod=2 Feb 28 04:00:44 crc kubenswrapper[4624]: I0228 04:00:44.987643 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.105296 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-utilities\") pod \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.105949 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxksb\" (UniqueName: \"kubernetes.io/projected/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-kube-api-access-zxksb\") pod \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.106355 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-catalog-content\") pod \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\" (UID: \"a46f00fe-c0ef-4e86-910f-6fd75b840fa4\") " Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.108247 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-utilities" (OuterVolumeSpecName: "utilities") pod "a46f00fe-c0ef-4e86-910f-6fd75b840fa4" (UID: "a46f00fe-c0ef-4e86-910f-6fd75b840fa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.116656 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-kube-api-access-zxksb" (OuterVolumeSpecName: "kube-api-access-zxksb") pod "a46f00fe-c0ef-4e86-910f-6fd75b840fa4" (UID: "a46f00fe-c0ef-4e86-910f-6fd75b840fa4"). InnerVolumeSpecName "kube-api-access-zxksb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.173493 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a46f00fe-c0ef-4e86-910f-6fd75b840fa4" (UID: "a46f00fe-c0ef-4e86-910f-6fd75b840fa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.209132 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.209167 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxksb\" (UniqueName: \"kubernetes.io/projected/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-kube-api-access-zxksb\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.209180 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46f00fe-c0ef-4e86-910f-6fd75b840fa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.451547 4624 generic.go:334] "Generic (PLEG): container finished" podID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerID="c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241" exitCode=0 Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.451629 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerDied","Data":"c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241"} Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.451695 4624 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztvjp" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.451724 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztvjp" event={"ID":"a46f00fe-c0ef-4e86-910f-6fd75b840fa4","Type":"ContainerDied","Data":"65749f1d33de5822ca4c7a30262dfd09805bce29e66f5275bd066430c5641544"} Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.451755 4624 scope.go:117] "RemoveContainer" containerID="c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.491240 4624 scope.go:117] "RemoveContainer" containerID="2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.506852 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztvjp"] Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.520181 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ztvjp"] Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.527672 4624 scope.go:117] "RemoveContainer" containerID="7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.581580 4624 scope.go:117] "RemoveContainer" containerID="c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241" Feb 28 04:00:45 crc kubenswrapper[4624]: E0228 04:00:45.582329 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241\": container with ID starting with c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241 not found: ID does not exist" containerID="c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.582373 
4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241"} err="failed to get container status \"c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241\": rpc error: code = NotFound desc = could not find container \"c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241\": container with ID starting with c01807be79d8202b77c359e9b7fdf3cd92a92c960415a00eb74b2bc3d40fd241 not found: ID does not exist" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.582404 4624 scope.go:117] "RemoveContainer" containerID="2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455" Feb 28 04:00:45 crc kubenswrapper[4624]: E0228 04:00:45.582877 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455\": container with ID starting with 2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455 not found: ID does not exist" containerID="2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.582907 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455"} err="failed to get container status \"2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455\": rpc error: code = NotFound desc = could not find container \"2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455\": container with ID starting with 2ef01c425eb28976a17ea8e4a25ae5bdb7c9d072d5fea7936e8f228f217f9455 not found: ID does not exist" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.582924 4624 scope.go:117] "RemoveContainer" containerID="7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164" Feb 28 04:00:45 crc kubenswrapper[4624]: E0228 
04:00:45.583290 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164\": container with ID starting with 7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164 not found: ID does not exist" containerID="7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164" Feb 28 04:00:45 crc kubenswrapper[4624]: I0228 04:00:45.583318 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164"} err="failed to get container status \"7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164\": rpc error: code = NotFound desc = could not find container \"7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164\": container with ID starting with 7dad0be037bb161eb551f71aeea99a8e8810eac6582ee34e9c35c8a8f7714164 not found: ID does not exist" Feb 28 04:00:46 crc kubenswrapper[4624]: I0228 04:00:46.122720 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" path="/var/lib/kubelet/pods/a46f00fe-c0ef-4e86-910f-6fd75b840fa4/volumes" Feb 28 04:00:58 crc kubenswrapper[4624]: I0228 04:00:58.618877 4624 scope.go:117] "RemoveContainer" containerID="1cc0fca455bd32cd8d0ba414a92b4450c6254c4d8ce0c962d131629f93437410" Feb 28 04:00:58 crc kubenswrapper[4624]: I0228 04:00:58.665815 4624 scope.go:117] "RemoveContainer" containerID="2eddc4df89acc270d9d8a7160b4029944058c3e59c39dbb6367ad09756366451" Feb 28 04:00:58 crc kubenswrapper[4624]: I0228 04:00:58.710079 4624 scope.go:117] "RemoveContainer" containerID="326d330c54445bd9e64549f8b30ab483b045f138c92e5c8ca397eb281e7d7876" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.157876 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29537521-9tntm"] Feb 28 04:01:00 crc kubenswrapper[4624]: 
E0228 04:01:00.159626 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="extract-utilities" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.159734 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="extract-utilities" Feb 28 04:01:00 crc kubenswrapper[4624]: E0228 04:01:00.159972 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="extract-content" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.160056 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="extract-content" Feb 28 04:01:00 crc kubenswrapper[4624]: E0228 04:01:00.160190 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="registry-server" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.160276 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="registry-server" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.160600 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46f00fe-c0ef-4e86-910f-6fd75b840fa4" containerName="registry-server" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.161631 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.177246 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537521-9tntm"] Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.338778 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-fernet-keys\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.338862 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-combined-ca-bundle\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.338920 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zg2h\" (UniqueName: \"kubernetes.io/projected/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-kube-api-access-8zg2h\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.338965 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-config-data\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.441386 4624 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-fernet-keys\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.441870 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-combined-ca-bundle\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.442043 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zg2h\" (UniqueName: \"kubernetes.io/projected/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-kube-api-access-8zg2h\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.442169 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-config-data\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.452528 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-combined-ca-bundle\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.452550 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-fernet-keys\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.452797 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-config-data\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.463863 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zg2h\" (UniqueName: \"kubernetes.io/projected/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-kube-api-access-8zg2h\") pod \"keystone-cron-29537521-9tntm\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:00 crc kubenswrapper[4624]: I0228 04:01:00.537299 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:01 crc kubenswrapper[4624]: I0228 04:01:01.046016 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537521-9tntm"] Feb 28 04:01:01 crc kubenswrapper[4624]: I0228 04:01:01.688075 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537521-9tntm" event={"ID":"95510d44-6b29-45b3-b0a0-4a6ad761fa4e","Type":"ContainerStarted","Data":"8afead0b091df9604dd4226b9904abaa282da043b150403fbf87585da9aa0d98"} Feb 28 04:01:01 crc kubenswrapper[4624]: I0228 04:01:01.688532 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537521-9tntm" event={"ID":"95510d44-6b29-45b3-b0a0-4a6ad761fa4e","Type":"ContainerStarted","Data":"2461d6ae348bcf301183114965c07058c0bfceb6ecc7446effa3a4d3a391d50a"} Feb 28 04:01:01 crc kubenswrapper[4624]: I0228 04:01:01.708964 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29537521-9tntm" podStartSLOduration=1.708942478 podStartE2EDuration="1.708942478s" podCreationTimestamp="2026-02-28 04:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:01:01.703793609 +0000 UTC m=+1516.367832918" watchObservedRunningTime="2026-02-28 04:01:01.708942478 +0000 UTC m=+1516.372981777" Feb 28 04:01:05 crc kubenswrapper[4624]: I0228 04:01:05.755015 4624 generic.go:334] "Generic (PLEG): container finished" podID="95510d44-6b29-45b3-b0a0-4a6ad761fa4e" containerID="8afead0b091df9604dd4226b9904abaa282da043b150403fbf87585da9aa0d98" exitCode=0 Feb 28 04:01:05 crc kubenswrapper[4624]: I0228 04:01:05.755147 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537521-9tntm" 
event={"ID":"95510d44-6b29-45b3-b0a0-4a6ad761fa4e","Type":"ContainerDied","Data":"8afead0b091df9604dd4226b9904abaa282da043b150403fbf87585da9aa0d98"} Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.201332 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.355047 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-combined-ca-bundle\") pod \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.355185 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zg2h\" (UniqueName: \"kubernetes.io/projected/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-kube-api-access-8zg2h\") pod \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.355451 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-fernet-keys\") pod \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.355498 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-config-data\") pod \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\" (UID: \"95510d44-6b29-45b3-b0a0-4a6ad761fa4e\") " Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.362051 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-kube-api-access-8zg2h" 
(OuterVolumeSpecName: "kube-api-access-8zg2h") pod "95510d44-6b29-45b3-b0a0-4a6ad761fa4e" (UID: "95510d44-6b29-45b3-b0a0-4a6ad761fa4e"). InnerVolumeSpecName "kube-api-access-8zg2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.374392 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "95510d44-6b29-45b3-b0a0-4a6ad761fa4e" (UID: "95510d44-6b29-45b3-b0a0-4a6ad761fa4e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.392247 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95510d44-6b29-45b3-b0a0-4a6ad761fa4e" (UID: "95510d44-6b29-45b3-b0a0-4a6ad761fa4e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.458945 4624 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.458987 4624 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.458998 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zg2h\" (UniqueName: \"kubernetes.io/projected/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-kube-api-access-8zg2h\") on node \"crc\" DevicePath \"\"" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.485268 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-config-data" (OuterVolumeSpecName: "config-data") pod "95510d44-6b29-45b3-b0a0-4a6ad761fa4e" (UID: "95510d44-6b29-45b3-b0a0-4a6ad761fa4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.561842 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95510d44-6b29-45b3-b0a0-4a6ad761fa4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.777144 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537521-9tntm" event={"ID":"95510d44-6b29-45b3-b0a0-4a6ad761fa4e","Type":"ContainerDied","Data":"2461d6ae348bcf301183114965c07058c0bfceb6ecc7446effa3a4d3a391d50a"} Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.777195 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2461d6ae348bcf301183114965c07058c0bfceb6ecc7446effa3a4d3a391d50a" Feb 28 04:01:07 crc kubenswrapper[4624]: I0228 04:01:07.777230 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537521-9tntm" Feb 28 04:01:19 crc kubenswrapper[4624]: I0228 04:01:19.540028 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:01:19 crc kubenswrapper[4624]: I0228 04:01:19.540982 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:01:49 crc kubenswrapper[4624]: I0228 04:01:49.539999 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:01:49 crc kubenswrapper[4624]: I0228 04:01:49.541026 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:01:58 crc kubenswrapper[4624]: I0228 04:01:58.856196 4624 scope.go:117] "RemoveContainer" containerID="18e8b0adf8dacee2279e5b14dd8b365ee82893628728eec9f75da364da975843" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.157450 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537522-zcrkb"] Feb 28 04:02:00 crc kubenswrapper[4624]: E0228 04:02:00.157907 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95510d44-6b29-45b3-b0a0-4a6ad761fa4e" containerName="keystone-cron" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.157922 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="95510d44-6b29-45b3-b0a0-4a6ad761fa4e" containerName="keystone-cron" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.158550 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="95510d44-6b29-45b3-b0a0-4a6ad761fa4e" containerName="keystone-cron" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.159273 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.162368 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.162448 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.165289 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.184880 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2k9g\" (UniqueName: \"kubernetes.io/projected/41b5e045-6018-480a-9615-d545e8d4921d-kube-api-access-f2k9g\") pod \"auto-csr-approver-29537522-zcrkb\" (UID: \"41b5e045-6018-480a-9615-d545e8d4921d\") " pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.232556 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537522-zcrkb"] Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.287507 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2k9g\" (UniqueName: \"kubernetes.io/projected/41b5e045-6018-480a-9615-d545e8d4921d-kube-api-access-f2k9g\") pod \"auto-csr-approver-29537522-zcrkb\" (UID: \"41b5e045-6018-480a-9615-d545e8d4921d\") " pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.307400 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2k9g\" (UniqueName: \"kubernetes.io/projected/41b5e045-6018-480a-9615-d545e8d4921d-kube-api-access-f2k9g\") pod \"auto-csr-approver-29537522-zcrkb\" (UID: \"41b5e045-6018-480a-9615-d545e8d4921d\") " 
pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:00 crc kubenswrapper[4624]: I0228 04:02:00.537736 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:01 crc kubenswrapper[4624]: I0228 04:02:01.213685 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537522-zcrkb"] Feb 28 04:02:01 crc kubenswrapper[4624]: I0228 04:02:01.443181 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" event={"ID":"41b5e045-6018-480a-9615-d545e8d4921d","Type":"ContainerStarted","Data":"e06126b4aeb3203ed1406f11a0933b403cf4e8efcd4b3847ec44a6fa4acb3fd9"} Feb 28 04:02:03 crc kubenswrapper[4624]: I0228 04:02:03.471203 4624 generic.go:334] "Generic (PLEG): container finished" podID="41b5e045-6018-480a-9615-d545e8d4921d" containerID="65234ee8a5d9538ddd5db7eb9a455c8ea2e2a70eacfb25f2572f7972ef74a39b" exitCode=0 Feb 28 04:02:03 crc kubenswrapper[4624]: I0228 04:02:03.471340 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" event={"ID":"41b5e045-6018-480a-9615-d545e8d4921d","Type":"ContainerDied","Data":"65234ee8a5d9538ddd5db7eb9a455c8ea2e2a70eacfb25f2572f7972ef74a39b"} Feb 28 04:02:04 crc kubenswrapper[4624]: I0228 04:02:04.928506 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:05 crc kubenswrapper[4624]: I0228 04:02:05.095374 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2k9g\" (UniqueName: \"kubernetes.io/projected/41b5e045-6018-480a-9615-d545e8d4921d-kube-api-access-f2k9g\") pod \"41b5e045-6018-480a-9615-d545e8d4921d\" (UID: \"41b5e045-6018-480a-9615-d545e8d4921d\") " Feb 28 04:02:05 crc kubenswrapper[4624]: I0228 04:02:05.110758 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b5e045-6018-480a-9615-d545e8d4921d-kube-api-access-f2k9g" (OuterVolumeSpecName: "kube-api-access-f2k9g") pod "41b5e045-6018-480a-9615-d545e8d4921d" (UID: "41b5e045-6018-480a-9615-d545e8d4921d"). InnerVolumeSpecName "kube-api-access-f2k9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:02:05 crc kubenswrapper[4624]: I0228 04:02:05.199311 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2k9g\" (UniqueName: \"kubernetes.io/projected/41b5e045-6018-480a-9615-d545e8d4921d-kube-api-access-f2k9g\") on node \"crc\" DevicePath \"\"" Feb 28 04:02:05 crc kubenswrapper[4624]: I0228 04:02:05.494299 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" event={"ID":"41b5e045-6018-480a-9615-d545e8d4921d","Type":"ContainerDied","Data":"e06126b4aeb3203ed1406f11a0933b403cf4e8efcd4b3847ec44a6fa4acb3fd9"} Feb 28 04:02:05 crc kubenswrapper[4624]: I0228 04:02:05.494355 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06126b4aeb3203ed1406f11a0933b403cf4e8efcd4b3847ec44a6fa4acb3fd9" Feb 28 04:02:05 crc kubenswrapper[4624]: I0228 04:02:05.494352 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537522-zcrkb" Feb 28 04:02:06 crc kubenswrapper[4624]: I0228 04:02:06.018729 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537516-ggl8l"] Feb 28 04:02:06 crc kubenswrapper[4624]: I0228 04:02:06.028350 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537516-ggl8l"] Feb 28 04:02:06 crc kubenswrapper[4624]: I0228 04:02:06.103019 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57d1ad8-41f4-45c4-8823-9b854dcf073e" path="/var/lib/kubelet/pods/c57d1ad8-41f4-45c4-8823-9b854dcf073e/volumes" Feb 28 04:02:19 crc kubenswrapper[4624]: I0228 04:02:19.539701 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:02:19 crc kubenswrapper[4624]: I0228 04:02:19.540247 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:02:19 crc kubenswrapper[4624]: I0228 04:02:19.540305 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:02:19 crc kubenswrapper[4624]: I0228 04:02:19.541346 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:02:19 crc kubenswrapper[4624]: I0228 04:02:19.541406 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" gracePeriod=600 Feb 28 04:02:19 crc kubenswrapper[4624]: E0228 04:02:19.689005 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:02:20 crc kubenswrapper[4624]: I0228 04:02:20.670616 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" exitCode=0 Feb 28 04:02:20 crc kubenswrapper[4624]: I0228 04:02:20.670811 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a"} Feb 28 04:02:20 crc kubenswrapper[4624]: I0228 04:02:20.670960 4624 scope.go:117] "RemoveContainer" containerID="3e622bb9b59d75db81661ea3bbd98b8b7ddaf42dfc4fa7c42ffda82eb1458133" Feb 28 04:02:20 crc kubenswrapper[4624]: I0228 04:02:20.672316 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:02:20 crc kubenswrapper[4624]: E0228 04:02:20.673056 4624 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.677997 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2w2"] Feb 28 04:02:28 crc kubenswrapper[4624]: E0228 04:02:28.679667 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b5e045-6018-480a-9615-d545e8d4921d" containerName="oc" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.679686 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b5e045-6018-480a-9615-d545e8d4921d" containerName="oc" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.679958 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b5e045-6018-480a-9615-d545e8d4921d" containerName="oc" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.682292 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.703157 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2w2"] Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.850146 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8dfj\" (UniqueName: \"kubernetes.io/projected/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-kube-api-access-c8dfj\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.850206 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-catalog-content\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.850331 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-utilities\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.952311 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8dfj\" (UniqueName: \"kubernetes.io/projected/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-kube-api-access-c8dfj\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.952377 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-catalog-content\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.952522 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-utilities\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.954212 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-catalog-content\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.954270 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-utilities\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:28 crc kubenswrapper[4624]: I0228 04:02:28.979669 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8dfj\" (UniqueName: \"kubernetes.io/projected/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-kube-api-access-c8dfj\") pod \"redhat-marketplace-wt2w2\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:29 crc kubenswrapper[4624]: I0228 04:02:29.014508 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:29 crc kubenswrapper[4624]: I0228 04:02:29.505435 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2w2"] Feb 28 04:02:29 crc kubenswrapper[4624]: I0228 04:02:29.783437 4624 generic.go:334] "Generic (PLEG): container finished" podID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerID="8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348" exitCode=0 Feb 28 04:02:29 crc kubenswrapper[4624]: I0228 04:02:29.783569 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerDied","Data":"8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348"} Feb 28 04:02:29 crc kubenswrapper[4624]: I0228 04:02:29.783751 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerStarted","Data":"16aa06ed25ed88aae932fd16fedfc64f4d5fc5b76f49f0a96cc93f2838a5e4c6"} Feb 28 04:02:31 crc kubenswrapper[4624]: I0228 04:02:31.806056 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerStarted","Data":"00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4"} Feb 28 04:02:32 crc kubenswrapper[4624]: I0228 04:02:32.818398 4624 generic.go:334] "Generic (PLEG): container finished" podID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerID="00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4" exitCode=0 Feb 28 04:02:32 crc kubenswrapper[4624]: I0228 04:02:32.819194 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" 
event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerDied","Data":"00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4"} Feb 28 04:02:33 crc kubenswrapper[4624]: I0228 04:02:33.830516 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerStarted","Data":"92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527"} Feb 28 04:02:33 crc kubenswrapper[4624]: I0228 04:02:33.859232 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wt2w2" podStartSLOduration=2.091826805 podStartE2EDuration="5.859209733s" podCreationTimestamp="2026-02-28 04:02:28 +0000 UTC" firstStartedPulling="2026-02-28 04:02:29.78540132 +0000 UTC m=+1604.449440629" lastFinishedPulling="2026-02-28 04:02:33.552784248 +0000 UTC m=+1608.216823557" observedRunningTime="2026-02-28 04:02:33.858720029 +0000 UTC m=+1608.522759338" watchObservedRunningTime="2026-02-28 04:02:33.859209733 +0000 UTC m=+1608.523249042" Feb 28 04:02:35 crc kubenswrapper[4624]: I0228 04:02:35.087371 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:02:35 crc kubenswrapper[4624]: E0228 04:02:35.087975 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:02:39 crc kubenswrapper[4624]: I0228 04:02:39.014972 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:39 crc 
kubenswrapper[4624]: I0228 04:02:39.016768 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:40 crc kubenswrapper[4624]: I0228 04:02:40.065367 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wt2w2" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="registry-server" probeResult="failure" output=< Feb 28 04:02:40 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:02:40 crc kubenswrapper[4624]: > Feb 28 04:02:49 crc kubenswrapper[4624]: I0228 04:02:49.081881 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:49 crc kubenswrapper[4624]: I0228 04:02:49.144330 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:49 crc kubenswrapper[4624]: I0228 04:02:49.329210 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2w2"] Feb 28 04:02:50 crc kubenswrapper[4624]: I0228 04:02:50.089295 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:02:50 crc kubenswrapper[4624]: E0228 04:02:50.091182 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.021873 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wt2w2" 
podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="registry-server" containerID="cri-o://92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527" gracePeriod=2 Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.610244 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.681525 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-utilities\") pod \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.681657 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-catalog-content\") pod \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.681962 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8dfj\" (UniqueName: \"kubernetes.io/projected/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-kube-api-access-c8dfj\") pod \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\" (UID: \"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b\") " Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.682499 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-utilities" (OuterVolumeSpecName: "utilities") pod "b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" (UID: "b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.692418 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-kube-api-access-c8dfj" (OuterVolumeSpecName: "kube-api-access-c8dfj") pod "b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" (UID: "b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b"). InnerVolumeSpecName "kube-api-access-c8dfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.720716 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" (UID: "b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.784792 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8dfj\" (UniqueName: \"kubernetes.io/projected/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-kube-api-access-c8dfj\") on node \"crc\" DevicePath \"\"" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.784831 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:02:51 crc kubenswrapper[4624]: I0228 04:02:51.784842 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.034762 4624 generic.go:334] "Generic (PLEG): container finished" podID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" 
containerID="92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527" exitCode=0 Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.034832 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerDied","Data":"92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527"} Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.034868 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2w2" event={"ID":"b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b","Type":"ContainerDied","Data":"16aa06ed25ed88aae932fd16fedfc64f4d5fc5b76f49f0a96cc93f2838a5e4c6"} Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.034888 4624 scope.go:117] "RemoveContainer" containerID="92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.035072 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2w2" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.069387 4624 scope.go:117] "RemoveContainer" containerID="00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.101431 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2w2"] Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.111182 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2w2"] Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.130694 4624 scope.go:117] "RemoveContainer" containerID="8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.189278 4624 scope.go:117] "RemoveContainer" containerID="92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527" Feb 28 04:02:52 crc kubenswrapper[4624]: E0228 04:02:52.190045 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527\": container with ID starting with 92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527 not found: ID does not exist" containerID="92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.190108 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527"} err="failed to get container status \"92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527\": rpc error: code = NotFound desc = could not find container \"92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527\": container with ID starting with 92b47f05f4617cc9813672f04f424433856f79379e5d20c55451c9ae5e3c1527 not found: 
ID does not exist" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.190139 4624 scope.go:117] "RemoveContainer" containerID="00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4" Feb 28 04:02:52 crc kubenswrapper[4624]: E0228 04:02:52.190652 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4\": container with ID starting with 00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4 not found: ID does not exist" containerID="00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.190703 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4"} err="failed to get container status \"00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4\": rpc error: code = NotFound desc = could not find container \"00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4\": container with ID starting with 00c816e87ef935f8ede9db2c6bb051ff044e4cd2cbf174a6ac2ffb7dc639ecf4 not found: ID does not exist" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.190743 4624 scope.go:117] "RemoveContainer" containerID="8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348" Feb 28 04:02:52 crc kubenswrapper[4624]: E0228 04:02:52.191121 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348\": container with ID starting with 8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348 not found: ID does not exist" containerID="8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348" Feb 28 04:02:52 crc kubenswrapper[4624]: I0228 04:02:52.191151 4624 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348"} err="failed to get container status \"8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348\": rpc error: code = NotFound desc = could not find container \"8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348\": container with ID starting with 8946616c28f27a8fef106b9bc5350b06428a6f875915bfe82558d4f8727fd348 not found: ID does not exist" Feb 28 04:02:54 crc kubenswrapper[4624]: I0228 04:02:54.100563 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" path="/var/lib/kubelet/pods/b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b/volumes" Feb 28 04:02:58 crc kubenswrapper[4624]: I0228 04:02:58.939862 4624 scope.go:117] "RemoveContainer" containerID="8ed4b96d3f0d604124aec8a4cd287291c1c0d013b02a3f4b74063ff10da4024c" Feb 28 04:03:00 crc kubenswrapper[4624]: I0228 04:03:00.777491 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:03:00 crc kubenswrapper[4624]: E0228 04:03:00.778190 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:03:03 crc kubenswrapper[4624]: I0228 04:03:03.423198 4624 patch_prober.go:28] interesting pod/route-controller-manager-d88d9fbf-2w78j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 28 04:03:03 crc kubenswrapper[4624]: I0228 04:03:03.424035 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-d88d9fbf-2w78j" podUID="15733fef-9371-4d87-919d-54841fb4719c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:03:12 crc kubenswrapper[4624]: I0228 04:03:12.089289 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:03:12 crc kubenswrapper[4624]: E0228 04:03:12.090377 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:03:23 crc kubenswrapper[4624]: I0228 04:03:23.052971 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-865f-account-create-update-qf8xq"] Feb 28 04:03:23 crc kubenswrapper[4624]: I0228 04:03:23.061790 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-skbgt"] Feb 28 04:03:23 crc kubenswrapper[4624]: I0228 04:03:23.074721 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-865f-account-create-update-qf8xq"] Feb 28 04:03:23 crc kubenswrapper[4624]: I0228 04:03:23.083043 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-skbgt"] Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.033747 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-m48t5"] 
Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.045256 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-m48t5"] Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.055450 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7d1b-account-create-update-2v8tl"] Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.066626 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7d1b-account-create-update-2v8tl"] Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.110642 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3" path="/var/lib/kubelet/pods/1921f1f5-2dcd-4574-b20c-c2a3c4b55cf3/volumes" Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.111844 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f97bebd-cc2b-4587-89d3-6f7c0f463c57" path="/var/lib/kubelet/pods/1f97bebd-cc2b-4587-89d3-6f7c0f463c57/volumes" Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.112680 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c120112d-dd0a-45a5-9cc8-90829cd3b434" path="/var/lib/kubelet/pods/c120112d-dd0a-45a5-9cc8-90829cd3b434/volumes" Feb 28 04:03:24 crc kubenswrapper[4624]: I0228 04:03:24.113478 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd94e772-912b-4331-8658-184ae20ef60b" path="/var/lib/kubelet/pods/dd94e772-912b-4331-8658-184ae20ef60b/volumes" Feb 28 04:03:25 crc kubenswrapper[4624]: I0228 04:03:25.087613 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:03:25 crc kubenswrapper[4624]: E0228 04:03:25.088126 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:03:27 crc kubenswrapper[4624]: I0228 04:03:27.039405 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7pjw8"] Feb 28 04:03:27 crc kubenswrapper[4624]: I0228 04:03:27.052057 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7pjw8"] Feb 28 04:03:28 crc kubenswrapper[4624]: I0228 04:03:28.070532 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6c58-account-create-update-lx7n2"] Feb 28 04:03:28 crc kubenswrapper[4624]: I0228 04:03:28.107560 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098b4982-cc16-4f80-97f7-fe2a7e29ec02" path="/var/lib/kubelet/pods/098b4982-cc16-4f80-97f7-fe2a7e29ec02/volumes" Feb 28 04:03:28 crc kubenswrapper[4624]: I0228 04:03:28.108354 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6c58-account-create-update-lx7n2"] Feb 28 04:03:30 crc kubenswrapper[4624]: I0228 04:03:30.103751 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015e6767-0363-4b22-83db-95e90db5e386" path="/var/lib/kubelet/pods/015e6767-0363-4b22-83db-95e90db5e386/volumes" Feb 28 04:03:37 crc kubenswrapper[4624]: I0228 04:03:37.087271 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:03:37 crc kubenswrapper[4624]: E0228 04:03:37.088852 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:03:40 crc kubenswrapper[4624]: I0228 04:03:40.055460 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jj9xk"] Feb 28 04:03:40 crc kubenswrapper[4624]: I0228 04:03:40.070658 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jj9xk"] Feb 28 04:03:40 crc kubenswrapper[4624]: I0228 04:03:40.102771 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb97dc8-c1d9-4d0b-82df-672a9d561356" path="/var/lib/kubelet/pods/7cb97dc8-c1d9-4d0b-82df-672a9d561356/volumes" Feb 28 04:03:49 crc kubenswrapper[4624]: I0228 04:03:49.088621 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:03:49 crc kubenswrapper[4624]: E0228 04:03:49.090038 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:03:52 crc kubenswrapper[4624]: I0228 04:03:52.032795 4624 generic.go:334] "Generic (PLEG): container finished" podID="b81c936c-7c68-4155-bee6-b4fab7bc44e8" containerID="e16797efaf9d197270507b37e25e2bd3098fedc76bb5af81c23c6c9954b83d03" exitCode=0 Feb 28 04:03:52 crc kubenswrapper[4624]: I0228 04:03:52.032923 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" event={"ID":"b81c936c-7c68-4155-bee6-b4fab7bc44e8","Type":"ContainerDied","Data":"e16797efaf9d197270507b37e25e2bd3098fedc76bb5af81c23c6c9954b83d03"} Feb 28 04:03:53 crc 
kubenswrapper[4624]: I0228 04:03:53.594512 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.686583 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-inventory\") pod \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.686660 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-bootstrap-combined-ca-bundle\") pod \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.686726 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pd4l\" (UniqueName: \"kubernetes.io/projected/b81c936c-7c68-4155-bee6-b4fab7bc44e8-kube-api-access-5pd4l\") pod \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.686908 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-ssh-key-openstack-edpm-ipam\") pod \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\" (UID: \"b81c936c-7c68-4155-bee6-b4fab7bc44e8\") " Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.692738 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b81c936c-7c68-4155-bee6-b4fab7bc44e8" (UID: 
"b81c936c-7c68-4155-bee6-b4fab7bc44e8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.694227 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81c936c-7c68-4155-bee6-b4fab7bc44e8-kube-api-access-5pd4l" (OuterVolumeSpecName: "kube-api-access-5pd4l") pod "b81c936c-7c68-4155-bee6-b4fab7bc44e8" (UID: "b81c936c-7c68-4155-bee6-b4fab7bc44e8"). InnerVolumeSpecName "kube-api-access-5pd4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.715377 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-inventory" (OuterVolumeSpecName: "inventory") pod "b81c936c-7c68-4155-bee6-b4fab7bc44e8" (UID: "b81c936c-7c68-4155-bee6-b4fab7bc44e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.725022 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b81c936c-7c68-4155-bee6-b4fab7bc44e8" (UID: "b81c936c-7c68-4155-bee6-b4fab7bc44e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.789064 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.789124 4624 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.789140 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pd4l\" (UniqueName: \"kubernetes.io/projected/b81c936c-7c68-4155-bee6-b4fab7bc44e8-kube-api-access-5pd4l\") on node \"crc\" DevicePath \"\"" Feb 28 04:03:53 crc kubenswrapper[4624]: I0228 04:03:53.789150 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b81c936c-7c68-4155-bee6-b4fab7bc44e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.111262 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.127465 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86" event={"ID":"b81c936c-7c68-4155-bee6-b4fab7bc44e8","Type":"ContainerDied","Data":"ce885cd39097c3ace556b3b7862b501e75e0fca19e117dd1e48ad7a0ce7cc6ee"} Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.127521 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce885cd39097c3ace556b3b7862b501e75e0fca19e117dd1e48ad7a0ce7cc6ee" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.166711 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp"] Feb 28 04:03:54 crc kubenswrapper[4624]: E0228 04:03:54.167420 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81c936c-7c68-4155-bee6-b4fab7bc44e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.167443 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81c936c-7c68-4155-bee6-b4fab7bc44e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 04:03:54 crc kubenswrapper[4624]: E0228 04:03:54.167452 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="extract-utilities" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.167459 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="extract-utilities" Feb 28 04:03:54 crc kubenswrapper[4624]: E0228 04:03:54.167469 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="registry-server" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.167475 4624 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="registry-server" Feb 28 04:03:54 crc kubenswrapper[4624]: E0228 04:03:54.167514 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="extract-content" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.167520 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="extract-content" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.167692 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1082daa-a454-4c1d-a6ec-dc0dd4c8bd5b" containerName="registry-server" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.167715 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81c936c-7c68-4155-bee6-b4fab7bc44e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.170616 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.173572 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.174182 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.174358 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.174446 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.180169 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp"] Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.301987 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.302049 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vr9l\" (UniqueName: \"kubernetes.io/projected/23fb1205-74ef-497d-bbd0-10fff39c6a4a-kube-api-access-2vr9l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc 
kubenswrapper[4624]: I0228 04:03:54.302999 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.405321 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.405396 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.405434 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vr9l\" (UniqueName: \"kubernetes.io/projected/23fb1205-74ef-497d-bbd0-10fff39c6a4a-kube-api-access-2vr9l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.409695 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.411204 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.426499 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vr9l\" (UniqueName: \"kubernetes.io/projected/23fb1205-74ef-497d-bbd0-10fff39c6a4a-kube-api-access-2vr9l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jnldp\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:54 crc kubenswrapper[4624]: I0228 04:03:54.500211 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:03:55 crc kubenswrapper[4624]: I0228 04:03:55.161801 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp"] Feb 28 04:03:56 crc kubenswrapper[4624]: I0228 04:03:56.137793 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" event={"ID":"23fb1205-74ef-497d-bbd0-10fff39c6a4a","Type":"ContainerStarted","Data":"17450d3164172ca8c06f184f62e22f9f6aae358de405eb382d9c7296b15e6ec1"} Feb 28 04:03:56 crc kubenswrapper[4624]: I0228 04:03:56.138605 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" event={"ID":"23fb1205-74ef-497d-bbd0-10fff39c6a4a","Type":"ContainerStarted","Data":"160216a3cbda9f232cbe1789be040fcaefaec161d0ada78fb41823d8b93884be"} Feb 28 04:03:56 crc kubenswrapper[4624]: I0228 04:03:56.163386 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" podStartSLOduration=1.7305534219999998 podStartE2EDuration="2.163357443s" podCreationTimestamp="2026-02-28 04:03:54 +0000 UTC" firstStartedPulling="2026-02-28 04:03:55.175928987 +0000 UTC m=+1689.839968306" lastFinishedPulling="2026-02-28 04:03:55.608733008 +0000 UTC m=+1690.272772327" observedRunningTime="2026-02-28 04:03:56.152383857 +0000 UTC m=+1690.816423186" watchObservedRunningTime="2026-02-28 04:03:56.163357443 +0000 UTC m=+1690.827396762" Feb 28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.046400 4624 scope.go:117] "RemoveContainer" containerID="762a099e8b7e18c8c3787d6e8c5626c7d8012208125ba2b9927617a51234b0fc" Feb 28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.082579 4624 scope.go:117] "RemoveContainer" containerID="221bc2ae7755b9ed632c86287255a658800fd418dd9882e35dd6aaf7b7306a6e" Feb 
28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.125871 4624 scope.go:117] "RemoveContainer" containerID="ed0a55553f3c7cdce2f0c3a52db41adef65adf38737cf21e64c38d8bcdd1f212" Feb 28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.198647 4624 scope.go:117] "RemoveContainer" containerID="d29969dc544e2873b8a9ffc0fef4b068d3a3c84ebb01ebd161c67d0dbe18a33f" Feb 28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.266308 4624 scope.go:117] "RemoveContainer" containerID="53af4ee73cd670875e9e6cfb1e59885d3f8010c3cb97f5c1231146db7cd97070" Feb 28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.311463 4624 scope.go:117] "RemoveContainer" containerID="3446d5d34d71b6cfb9e5e2bfff7b699988b67697ac423f95a32a2d008ee10658" Feb 28 04:03:59 crc kubenswrapper[4624]: I0228 04:03:59.356589 4624 scope.go:117] "RemoveContainer" containerID="780a6a671c940e1f81fb13d333cfb796167a09f1c8228c2ced8dfd4747c9f664" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.159627 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537524-4rsbj"] Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.162616 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.171524 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.171675 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537524-4rsbj"] Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.171850 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.177834 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.304489 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh8jd\" (UniqueName: \"kubernetes.io/projected/56989800-b2b9-44f7-9dfb-c94eb5166870-kube-api-access-qh8jd\") pod \"auto-csr-approver-29537524-4rsbj\" (UID: \"56989800-b2b9-44f7-9dfb-c94eb5166870\") " pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.407344 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh8jd\" (UniqueName: \"kubernetes.io/projected/56989800-b2b9-44f7-9dfb-c94eb5166870-kube-api-access-qh8jd\") pod \"auto-csr-approver-29537524-4rsbj\" (UID: \"56989800-b2b9-44f7-9dfb-c94eb5166870\") " pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.445966 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh8jd\" (UniqueName: \"kubernetes.io/projected/56989800-b2b9-44f7-9dfb-c94eb5166870-kube-api-access-qh8jd\") pod \"auto-csr-approver-29537524-4rsbj\" (UID: \"56989800-b2b9-44f7-9dfb-c94eb5166870\") " 
pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.481416 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:00 crc kubenswrapper[4624]: I0228 04:04:00.977555 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537524-4rsbj"] Feb 28 04:04:01 crc kubenswrapper[4624]: I0228 04:04:01.214513 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" event={"ID":"56989800-b2b9-44f7-9dfb-c94eb5166870","Type":"ContainerStarted","Data":"8f4d76c4ae2b0181229514880d8f1a255a24a11200ce1a0590d58772426890e1"} Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.069673 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-dhvq6"] Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.104251 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xgkww"] Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.104299 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b63c-account-create-update-phz2m"] Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.110777 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-dhvq6"] Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.122828 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b63c-account-create-update-phz2m"] Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.135563 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xgkww"] Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.226385 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" 
event={"ID":"56989800-b2b9-44f7-9dfb-c94eb5166870","Type":"ContainerStarted","Data":"d24952b1d3ea42c764556d005dcf2dce66d4e577f5c14d3274cf9fa3f9f908a3"} Feb 28 04:04:02 crc kubenswrapper[4624]: I0228 04:04:02.261289 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" podStartSLOduration=1.380951661 podStartE2EDuration="2.261238351s" podCreationTimestamp="2026-02-28 04:04:00 +0000 UTC" firstStartedPulling="2026-02-28 04:04:00.978482489 +0000 UTC m=+1695.642521798" lastFinishedPulling="2026-02-28 04:04:01.858769179 +0000 UTC m=+1696.522808488" observedRunningTime="2026-02-28 04:04:02.247836238 +0000 UTC m=+1696.911875547" watchObservedRunningTime="2026-02-28 04:04:02.261238351 +0000 UTC m=+1696.925277670" Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.042397 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mcnvn"] Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.052287 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ca7d-account-create-update-zwcn8"] Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.062397 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-78ef-account-create-update-jgbrw"] Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.072204 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-78ef-account-create-update-jgbrw"] Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.081349 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mcnvn"] Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.089183 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ca7d-account-create-update-zwcn8"] Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.256892 4624 generic.go:334] "Generic (PLEG): container finished" podID="56989800-b2b9-44f7-9dfb-c94eb5166870" 
containerID="d24952b1d3ea42c764556d005dcf2dce66d4e577f5c14d3274cf9fa3f9f908a3" exitCode=0 Feb 28 04:04:03 crc kubenswrapper[4624]: I0228 04:04:03.256953 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" event={"ID":"56989800-b2b9-44f7-9dfb-c94eb5166870","Type":"ContainerDied","Data":"d24952b1d3ea42c764556d005dcf2dce66d4e577f5c14d3274cf9fa3f9f908a3"} Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.088712 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:04:04 crc kubenswrapper[4624]: E0228 04:04:04.089268 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.101017 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26804f93-99d0-4652-bdc9-10c3283cbd57" path="/var/lib/kubelet/pods/26804f93-99d0-4652-bdc9-10c3283cbd57/volumes" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.101936 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dbf943a-6678-4854-8bba-6d319f22b039" path="/var/lib/kubelet/pods/3dbf943a-6678-4854-8bba-6d319f22b039/volumes" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.102744 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941df155-1f41-44a9-bdf8-80a7e7864a2e" path="/var/lib/kubelet/pods/941df155-1f41-44a9-bdf8-80a7e7864a2e/volumes" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.103534 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ac85b036-a204-4026-841a-e4d3f91841ef" path="/var/lib/kubelet/pods/ac85b036-a204-4026-841a-e4d3f91841ef/volumes" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.105681 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6787397-0210-4076-90cd-039c9dae6dcb" path="/var/lib/kubelet/pods/b6787397-0210-4076-90cd-039c9dae6dcb/volumes" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.106743 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfdda2ff-5aea-4055-9c24-10ae333a0681" path="/var/lib/kubelet/pods/cfdda2ff-5aea-4055-9c24-10ae333a0681/volumes" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.670141 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.844667 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh8jd\" (UniqueName: \"kubernetes.io/projected/56989800-b2b9-44f7-9dfb-c94eb5166870-kube-api-access-qh8jd\") pod \"56989800-b2b9-44f7-9dfb-c94eb5166870\" (UID: \"56989800-b2b9-44f7-9dfb-c94eb5166870\") " Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.852849 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56989800-b2b9-44f7-9dfb-c94eb5166870-kube-api-access-qh8jd" (OuterVolumeSpecName: "kube-api-access-qh8jd") pod "56989800-b2b9-44f7-9dfb-c94eb5166870" (UID: "56989800-b2b9-44f7-9dfb-c94eb5166870"). InnerVolumeSpecName "kube-api-access-qh8jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:04:04 crc kubenswrapper[4624]: I0228 04:04:04.946451 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh8jd\" (UniqueName: \"kubernetes.io/projected/56989800-b2b9-44f7-9dfb-c94eb5166870-kube-api-access-qh8jd\") on node \"crc\" DevicePath \"\"" Feb 28 04:04:05 crc kubenswrapper[4624]: I0228 04:04:05.299707 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" event={"ID":"56989800-b2b9-44f7-9dfb-c94eb5166870","Type":"ContainerDied","Data":"8f4d76c4ae2b0181229514880d8f1a255a24a11200ce1a0590d58772426890e1"} Feb 28 04:04:05 crc kubenswrapper[4624]: I0228 04:04:05.299787 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4d76c4ae2b0181229514880d8f1a255a24a11200ce1a0590d58772426890e1" Feb 28 04:04:05 crc kubenswrapper[4624]: I0228 04:04:05.299898 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537524-4rsbj" Feb 28 04:04:05 crc kubenswrapper[4624]: I0228 04:04:05.347197 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537518-ntcqp"] Feb 28 04:04:05 crc kubenswrapper[4624]: I0228 04:04:05.366133 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537518-ntcqp"] Feb 28 04:04:06 crc kubenswrapper[4624]: I0228 04:04:06.115978 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1a92b2-aaa8-4b8c-a947-47561d583f80" path="/var/lib/kubelet/pods/ef1a92b2-aaa8-4b8c-a947-47561d583f80/volumes" Feb 28 04:04:13 crc kubenswrapper[4624]: I0228 04:04:13.042484 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8xzgn"] Feb 28 04:04:13 crc kubenswrapper[4624]: I0228 04:04:13.057445 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8xzgn"] Feb 28 
04:04:14 crc kubenswrapper[4624]: I0228 04:04:14.111755 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad93dbf2-c574-4701-8d8c-14f597593ea1" path="/var/lib/kubelet/pods/ad93dbf2-c574-4701-8d8c-14f597593ea1/volumes" Feb 28 04:04:19 crc kubenswrapper[4624]: I0228 04:04:19.087259 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:04:19 crc kubenswrapper[4624]: E0228 04:04:19.088042 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:04:32 crc kubenswrapper[4624]: I0228 04:04:32.093523 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:04:32 crc kubenswrapper[4624]: E0228 04:04:32.095139 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:04:33 crc kubenswrapper[4624]: I0228 04:04:33.063830 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jmxqw"] Feb 28 04:04:33 crc kubenswrapper[4624]: I0228 04:04:33.075269 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jmxqw"] Feb 28 04:04:34 crc kubenswrapper[4624]: I0228 04:04:34.134192 4624 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="e987d56b-dcae-4f73-8e96-9010674f3c4e" path="/var/lib/kubelet/pods/e987d56b-dcae-4f73-8e96-9010674f3c4e/volumes" Feb 28 04:04:47 crc kubenswrapper[4624]: I0228 04:04:47.088715 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:04:47 crc kubenswrapper[4624]: E0228 04:04:47.089929 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:04:54 crc kubenswrapper[4624]: I0228 04:04:54.073044 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vrwnn"] Feb 28 04:04:54 crc kubenswrapper[4624]: I0228 04:04:54.085788 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vrwnn"] Feb 28 04:04:54 crc kubenswrapper[4624]: I0228 04:04:54.107675 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1928c8-43f4-46a7-997e-baa034bb94d8" path="/var/lib/kubelet/pods/fb1928c8-43f4-46a7-997e-baa034bb94d8/volumes" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.540625 4624 scope.go:117] "RemoveContainer" containerID="f6ce4a2c70a6813aff13e07dc29f496b6b87405812f7cf36e2d1098ff78481c3" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.582848 4624 scope.go:117] "RemoveContainer" containerID="b7f9fe2ce98825cbbf96567f38a06779ba6c414aaf8cbe52b7e65b73fbf1759d" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.620967 4624 scope.go:117] "RemoveContainer" containerID="db1d9dd760ea9fcc53b414db2fbf588554a9a23988976b737e538146f445edd8" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.672240 4624 
scope.go:117] "RemoveContainer" containerID="44dd7334941203acec54576509f8450f7c271f99fc620caa6798bd650cbdecac" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.716465 4624 scope.go:117] "RemoveContainer" containerID="1fd0de55768ac8f25e420fee148da3d9b4231629c69a3ba5a906ee755cecb368" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.762876 4624 scope.go:117] "RemoveContainer" containerID="2a6c019d60640347946bf734a57b95a9fd5451cc4662dd20bae8810144016b35" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.823748 4624 scope.go:117] "RemoveContainer" containerID="283cdc928f65f1a37d0ed11b11f82ca4c86a1c24efef9adf112df1673cd556bd" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.878845 4624 scope.go:117] "RemoveContainer" containerID="e97c5c18ff52b1797a63167ae6ad7db176c84b98624dbb75c84232636f181179" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.919819 4624 scope.go:117] "RemoveContainer" containerID="8bea4d2098de4150a4cf2e08f339d2cfcc89f8d343586c83680d6fd75513239a" Feb 28 04:04:59 crc kubenswrapper[4624]: I0228 04:04:59.947756 4624 scope.go:117] "RemoveContainer" containerID="c5b66eff707641ba64fd2231b152e2ea08b558cce285a77ac32fc2f7c4724472" Feb 28 04:05:00 crc kubenswrapper[4624]: I0228 04:05:00.088187 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:05:00 crc kubenswrapper[4624]: E0228 04:05:00.088934 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:05:05 crc kubenswrapper[4624]: I0228 04:05:05.058853 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-7zvf6"] Feb 28 04:05:05 crc kubenswrapper[4624]: I0228 04:05:05.070270 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7zvf6"] Feb 28 04:05:06 crc kubenswrapper[4624]: I0228 04:05:06.101350 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdc81234-8d71-4da8-821f-62f79823de92" path="/var/lib/kubelet/pods/cdc81234-8d71-4da8-821f-62f79823de92/volumes" Feb 28 04:05:15 crc kubenswrapper[4624]: I0228 04:05:15.106760 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:05:15 crc kubenswrapper[4624]: E0228 04:05:15.107889 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:05:16 crc kubenswrapper[4624]: I0228 04:05:16.044099 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l9l75"] Feb 28 04:05:16 crc kubenswrapper[4624]: I0228 04:05:16.055547 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mkzf4"] Feb 28 04:05:16 crc kubenswrapper[4624]: I0228 04:05:16.070049 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mkzf4"] Feb 28 04:05:16 crc kubenswrapper[4624]: I0228 04:05:16.078763 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l9l75"] Feb 28 04:05:16 crc kubenswrapper[4624]: I0228 04:05:16.101938 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12eae8a2-7f1a-447e-afbc-30bc3760f6df" 
path="/var/lib/kubelet/pods/12eae8a2-7f1a-447e-afbc-30bc3760f6df/volumes" Feb 28 04:05:16 crc kubenswrapper[4624]: I0228 04:05:16.102874 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b0ae2c-3cbb-419b-8214-739eea04c9a4" path="/var/lib/kubelet/pods/34b0ae2c-3cbb-419b-8214-739eea04c9a4/volumes" Feb 28 04:05:26 crc kubenswrapper[4624]: I0228 04:05:26.089123 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:05:26 crc kubenswrapper[4624]: E0228 04:05:26.090303 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:05:34 crc kubenswrapper[4624]: I0228 04:05:34.059921 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8tlzl"] Feb 28 04:05:34 crc kubenswrapper[4624]: I0228 04:05:34.073600 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8tlzl"] Feb 28 04:05:34 crc kubenswrapper[4624]: I0228 04:05:34.099488 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d169c37-cd26-4e66-8f96-d0a53a96d616" path="/var/lib/kubelet/pods/0d169c37-cd26-4e66-8f96-d0a53a96d616/volumes" Feb 28 04:05:37 crc kubenswrapper[4624]: I0228 04:05:37.087932 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:05:37 crc kubenswrapper[4624]: E0228 04:05:37.088596 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:05:52 crc kubenswrapper[4624]: I0228 04:05:52.087791 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:05:52 crc kubenswrapper[4624]: E0228 04:05:52.088949 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.157072 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537526-llbnl"] Feb 28 04:06:00 crc kubenswrapper[4624]: E0228 04:06:00.158490 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56989800-b2b9-44f7-9dfb-c94eb5166870" containerName="oc" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.158515 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="56989800-b2b9-44f7-9dfb-c94eb5166870" containerName="oc" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.158836 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="56989800-b2b9-44f7-9dfb-c94eb5166870" containerName="oc" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.159871 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.162669 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.162879 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.164334 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.169544 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537526-llbnl"] Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.220161 4624 scope.go:117] "RemoveContainer" containerID="a09c59dbca07f1f63d63b0fe0b2e5b756fbbbfc13e082ea108ff7d2581f2ec01" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.275060 4624 scope.go:117] "RemoveContainer" containerID="9b0af6eb06910579d04fbc8d93136e0834c1eb7a82693d13312e6477d27651f1" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.314891 4624 scope.go:117] "RemoveContainer" containerID="f5d0ebf42cb55ccfef7a1d5efc023a3d26f7b91a7f65e9a75be2ca942b7f11e2" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.336041 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6lp2\" (UniqueName: \"kubernetes.io/projected/486f976c-5601-4fa6-a076-f1c064661903-kube-api-access-h6lp2\") pod \"auto-csr-approver-29537526-llbnl\" (UID: \"486f976c-5601-4fa6-a076-f1c064661903\") " pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.390304 4624 scope.go:117] "RemoveContainer" containerID="32dbac6a5ac9e1eb0c01d8030240c36e5ebd1d139f4bc5b4feb71e4a28f1bc46" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.437775 
4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6lp2\" (UniqueName: \"kubernetes.io/projected/486f976c-5601-4fa6-a076-f1c064661903-kube-api-access-h6lp2\") pod \"auto-csr-approver-29537526-llbnl\" (UID: \"486f976c-5601-4fa6-a076-f1c064661903\") " pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.461187 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6lp2\" (UniqueName: \"kubernetes.io/projected/486f976c-5601-4fa6-a076-f1c064661903-kube-api-access-h6lp2\") pod \"auto-csr-approver-29537526-llbnl\" (UID: \"486f976c-5601-4fa6-a076-f1c064661903\") " pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.531147 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.997263 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537526-llbnl"] Feb 28 04:06:00 crc kubenswrapper[4624]: I0228 04:06:00.998904 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:06:01 crc kubenswrapper[4624]: I0228 04:06:01.814801 4624 generic.go:334] "Generic (PLEG): container finished" podID="23fb1205-74ef-497d-bbd0-10fff39c6a4a" containerID="17450d3164172ca8c06f184f62e22f9f6aae358de405eb382d9c7296b15e6ec1" exitCode=0 Feb 28 04:06:01 crc kubenswrapper[4624]: I0228 04:06:01.814865 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" event={"ID":"23fb1205-74ef-497d-bbd0-10fff39c6a4a","Type":"ContainerDied","Data":"17450d3164172ca8c06f184f62e22f9f6aae358de405eb382d9c7296b15e6ec1"} Feb 28 04:06:01 crc kubenswrapper[4624]: I0228 04:06:01.816590 4624 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-infra/auto-csr-approver-29537526-llbnl" event={"ID":"486f976c-5601-4fa6-a076-f1c064661903","Type":"ContainerStarted","Data":"553922f11b0a958ce64e8d125ddacd17d3887c8c784b1b13b0a1371fb1c76f76"} Feb 28 04:06:02 crc kubenswrapper[4624]: I0228 04:06:02.830034 4624 generic.go:334] "Generic (PLEG): container finished" podID="486f976c-5601-4fa6-a076-f1c064661903" containerID="2c1d5a2a5930c0cfaf8c052177e691590858fbd88cd70e73e9c983229854d57d" exitCode=0 Feb 28 04:06:02 crc kubenswrapper[4624]: I0228 04:06:02.830114 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537526-llbnl" event={"ID":"486f976c-5601-4fa6-a076-f1c064661903","Type":"ContainerDied","Data":"2c1d5a2a5930c0cfaf8c052177e691590858fbd88cd70e73e9c983229854d57d"} Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.323593 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.508915 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-inventory\") pod \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.509050 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-ssh-key-openstack-edpm-ipam\") pod \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.509292 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vr9l\" (UniqueName: 
\"kubernetes.io/projected/23fb1205-74ef-497d-bbd0-10fff39c6a4a-kube-api-access-2vr9l\") pod \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\" (UID: \"23fb1205-74ef-497d-bbd0-10fff39c6a4a\") " Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.521056 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23fb1205-74ef-497d-bbd0-10fff39c6a4a-kube-api-access-2vr9l" (OuterVolumeSpecName: "kube-api-access-2vr9l") pod "23fb1205-74ef-497d-bbd0-10fff39c6a4a" (UID: "23fb1205-74ef-497d-bbd0-10fff39c6a4a"). InnerVolumeSpecName "kube-api-access-2vr9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.541659 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-inventory" (OuterVolumeSpecName: "inventory") pod "23fb1205-74ef-497d-bbd0-10fff39c6a4a" (UID: "23fb1205-74ef-497d-bbd0-10fff39c6a4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.558418 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23fb1205-74ef-497d-bbd0-10fff39c6a4a" (UID: "23fb1205-74ef-497d-bbd0-10fff39c6a4a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.612883 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.612927 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23fb1205-74ef-497d-bbd0-10fff39c6a4a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.612944 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vr9l\" (UniqueName: \"kubernetes.io/projected/23fb1205-74ef-497d-bbd0-10fff39c6a4a-kube-api-access-2vr9l\") on node \"crc\" DevicePath \"\"" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.845440 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" event={"ID":"23fb1205-74ef-497d-bbd0-10fff39c6a4a","Type":"ContainerDied","Data":"160216a3cbda9f232cbe1789be040fcaefaec161d0ada78fb41823d8b93884be"} Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.845495 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jnldp" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.845499 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="160216a3cbda9f232cbe1789be040fcaefaec161d0ada78fb41823d8b93884be" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.978263 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb"] Feb 28 04:06:03 crc kubenswrapper[4624]: E0228 04:06:03.979507 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fb1205-74ef-497d-bbd0-10fff39c6a4a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.979535 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fb1205-74ef-497d-bbd0-10fff39c6a4a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.980184 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="23fb1205-74ef-497d-bbd0-10fff39c6a4a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.981310 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.986448 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.986976 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.987295 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:06:03 crc kubenswrapper[4624]: I0228 04:06:03.988007 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.000595 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb"] Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.131240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.132141 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc 
kubenswrapper[4624]: I0228 04:06:04.132222 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q864l\" (UniqueName: \"kubernetes.io/projected/88fcba71-7eeb-4780-88f3-3d751230eb2a-kube-api-access-q864l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.234585 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.239176 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.240467 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.240557 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q864l\" (UniqueName: \"kubernetes.io/projected/88fcba71-7eeb-4780-88f3-3d751230eb2a-kube-api-access-q864l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.243864 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.260221 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q864l\" (UniqueName: \"kubernetes.io/projected/88fcba71-7eeb-4780-88f3-3d751230eb2a-kube-api-access-q864l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.313578 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.448960 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.550258 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6lp2\" (UniqueName: \"kubernetes.io/projected/486f976c-5601-4fa6-a076-f1c064661903-kube-api-access-h6lp2\") pod \"486f976c-5601-4fa6-a076-f1c064661903\" (UID: \"486f976c-5601-4fa6-a076-f1c064661903\") " Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.557880 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486f976c-5601-4fa6-a076-f1c064661903-kube-api-access-h6lp2" (OuterVolumeSpecName: "kube-api-access-h6lp2") pod "486f976c-5601-4fa6-a076-f1c064661903" (UID: "486f976c-5601-4fa6-a076-f1c064661903"). InnerVolumeSpecName "kube-api-access-h6lp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.653068 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6lp2\" (UniqueName: \"kubernetes.io/projected/486f976c-5601-4fa6-a076-f1c064661903-kube-api-access-h6lp2\") on node \"crc\" DevicePath \"\"" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.724103 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb"] Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.858819 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537526-llbnl" event={"ID":"486f976c-5601-4fa6-a076-f1c064661903","Type":"ContainerDied","Data":"553922f11b0a958ce64e8d125ddacd17d3887c8c784b1b13b0a1371fb1c76f76"} Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.858905 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553922f11b0a958ce64e8d125ddacd17d3887c8c784b1b13b0a1371fb1c76f76" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.858921 4624 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537526-llbnl" Feb 28 04:06:04 crc kubenswrapper[4624]: I0228 04:06:04.861909 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" event={"ID":"88fcba71-7eeb-4780-88f3-3d751230eb2a","Type":"ContainerStarted","Data":"7876b587aff7672e10905e41eced395302e9842fee982e68dc76a7051e2fa1ab"} Feb 28 04:06:05 crc kubenswrapper[4624]: I0228 04:06:05.562495 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537520-zqh2r"] Feb 28 04:06:05 crc kubenswrapper[4624]: I0228 04:06:05.575139 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537520-zqh2r"] Feb 28 04:06:05 crc kubenswrapper[4624]: I0228 04:06:05.883791 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" event={"ID":"88fcba71-7eeb-4780-88f3-3d751230eb2a","Type":"ContainerStarted","Data":"9f052ec100515847bcd7d9c84ddd6e2f8f61d49a88eb1d52e90949daf989e117"} Feb 28 04:06:05 crc kubenswrapper[4624]: I0228 04:06:05.925069 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" podStartSLOduration=2.4652548149999998 podStartE2EDuration="2.925044936s" podCreationTimestamp="2026-02-28 04:06:03 +0000 UTC" firstStartedPulling="2026-02-28 04:06:04.73053583 +0000 UTC m=+1819.394575139" lastFinishedPulling="2026-02-28 04:06:05.190325921 +0000 UTC m=+1819.854365260" observedRunningTime="2026-02-28 04:06:05.916403063 +0000 UTC m=+1820.580442422" watchObservedRunningTime="2026-02-28 04:06:05.925044936 +0000 UTC m=+1820.589084245" Feb 28 04:06:06 crc kubenswrapper[4624]: I0228 04:06:06.102322 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2febede5-9921-477c-94f4-496013e7274a" path="/var/lib/kubelet/pods/2febede5-9921-477c-94f4-496013e7274a/volumes" Feb 28 04:06:07 crc kubenswrapper[4624]: I0228 04:06:07.087287 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:06:07 crc kubenswrapper[4624]: E0228 04:06:07.087988 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:06:19 crc kubenswrapper[4624]: I0228 04:06:19.061572 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-17d0-account-create-update-gv99f"] Feb 28 04:06:19 crc kubenswrapper[4624]: I0228 04:06:19.079033 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-17d0-account-create-update-gv99f"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.052239 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9cfa-account-create-update-w9qh7"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.073324 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9cfa-account-create-update-w9qh7"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.101901 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee730e9-677d-4ae6-b242-dbdaee2e0ecc" path="/var/lib/kubelet/pods/1ee730e9-677d-4ae6-b242-dbdaee2e0ecc/volumes" Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.102690 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3628d61-3ed3-4dc8-b649-9748be42d073" 
path="/var/lib/kubelet/pods/d3628d61-3ed3-4dc8-b649-9748be42d073/volumes" Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.104213 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-t8gp6"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.104251 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-024b-account-create-update-gknfh"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.110404 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qkq46"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.121807 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-t8gp6"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.135481 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mj64h"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.143421 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qkq46"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.151148 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-024b-account-create-update-gknfh"] Feb 28 04:06:20 crc kubenswrapper[4624]: I0228 04:06:20.173937 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mj64h"] Feb 28 04:06:21 crc kubenswrapper[4624]: I0228 04:06:21.089368 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:06:21 crc kubenswrapper[4624]: E0228 04:06:21.089615 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:06:22 crc kubenswrapper[4624]: I0228 04:06:22.107318 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5443a5-da1f-4cf6-a6a1-bd562c45f257" path="/var/lib/kubelet/pods/3d5443a5-da1f-4cf6-a6a1-bd562c45f257/volumes" Feb 28 04:06:22 crc kubenswrapper[4624]: I0228 04:06:22.108626 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5072950d-2ee4-439c-ade4-63802cc55a48" path="/var/lib/kubelet/pods/5072950d-2ee4-439c-ade4-63802cc55a48/volumes" Feb 28 04:06:22 crc kubenswrapper[4624]: I0228 04:06:22.109501 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5506c57e-14c0-4fca-88ba-09db2ac80047" path="/var/lib/kubelet/pods/5506c57e-14c0-4fca-88ba-09db2ac80047/volumes" Feb 28 04:06:22 crc kubenswrapper[4624]: I0228 04:06:22.110483 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f46a0e-807d-4856-bb08-878dc3d19728" path="/var/lib/kubelet/pods/59f46a0e-807d-4856-bb08-878dc3d19728/volumes" Feb 28 04:06:36 crc kubenswrapper[4624]: I0228 04:06:36.094065 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:06:36 crc kubenswrapper[4624]: E0228 04:06:36.097634 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:06:47 crc kubenswrapper[4624]: I0228 04:06:47.088445 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:06:47 crc 
kubenswrapper[4624]: E0228 04:06:47.089705 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.520301 4624 scope.go:117] "RemoveContainer" containerID="c56e444347195ecf2666edcb080aa8735b0460ea2cfb76672fb393b0ae323d0e" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.551992 4624 scope.go:117] "RemoveContainer" containerID="c310ac6a435228634ccfe630475de36378b07a86ec7bc61749739d1eb23b6d1e" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.616124 4624 scope.go:117] "RemoveContainer" containerID="b0bd73a49e8ae6a04c11d686b555c198249589bd0fcb298a68c9e8d35acae9ea" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.672148 4624 scope.go:117] "RemoveContainer" containerID="4c9457d9820529b5517b6941d02d2457ed943e36538bfc1766662590702b56e5" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.720203 4624 scope.go:117] "RemoveContainer" containerID="b2d7f35ed6731aad08f0ae5940ec1e8afed45565b8e3efedb47b1710b1804fea" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.769201 4624 scope.go:117] "RemoveContainer" containerID="6d33b8ba5be7135b759fe37fd36c46342563b111642d63c5436209096d5df6b0" Feb 28 04:07:00 crc kubenswrapper[4624]: I0228 04:07:00.821717 4624 scope.go:117] "RemoveContainer" containerID="0d183760a036fd1efb99688b4e26f3e5c0b51648fd34421c97f9c910e5084f94" Feb 28 04:07:01 crc kubenswrapper[4624]: I0228 04:07:01.087492 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:07:01 crc kubenswrapper[4624]: E0228 04:07:01.087989 4624 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:07:10 crc kubenswrapper[4624]: I0228 04:07:10.048063 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fr975"] Feb 28 04:07:10 crc kubenswrapper[4624]: I0228 04:07:10.077717 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fr975"] Feb 28 04:07:10 crc kubenswrapper[4624]: I0228 04:07:10.104987 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46302b23-1f0a-4e63-948a-bcc402ca3dc1" path="/var/lib/kubelet/pods/46302b23-1f0a-4e63-948a-bcc402ca3dc1/volumes" Feb 28 04:07:12 crc kubenswrapper[4624]: I0228 04:07:12.087824 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:07:12 crc kubenswrapper[4624]: E0228 04:07:12.089241 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:07:15 crc kubenswrapper[4624]: I0228 04:07:15.683510 4624 generic.go:334] "Generic (PLEG): container finished" podID="88fcba71-7eeb-4780-88f3-3d751230eb2a" containerID="9f052ec100515847bcd7d9c84ddd6e2f8f61d49a88eb1d52e90949daf989e117" exitCode=0 Feb 28 04:07:15 crc kubenswrapper[4624]: I0228 04:07:15.683576 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" event={"ID":"88fcba71-7eeb-4780-88f3-3d751230eb2a","Type":"ContainerDied","Data":"9f052ec100515847bcd7d9c84ddd6e2f8f61d49a88eb1d52e90949daf989e117"} Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.157152 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.349812 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-inventory\") pod \"88fcba71-7eeb-4780-88f3-3d751230eb2a\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.350292 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q864l\" (UniqueName: \"kubernetes.io/projected/88fcba71-7eeb-4780-88f3-3d751230eb2a-kube-api-access-q864l\") pod \"88fcba71-7eeb-4780-88f3-3d751230eb2a\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.350848 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-ssh-key-openstack-edpm-ipam\") pod \"88fcba71-7eeb-4780-88f3-3d751230eb2a\" (UID: \"88fcba71-7eeb-4780-88f3-3d751230eb2a\") " Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.360820 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fcba71-7eeb-4780-88f3-3d751230eb2a-kube-api-access-q864l" (OuterVolumeSpecName: "kube-api-access-q864l") pod "88fcba71-7eeb-4780-88f3-3d751230eb2a" (UID: "88fcba71-7eeb-4780-88f3-3d751230eb2a"). InnerVolumeSpecName "kube-api-access-q864l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.381928 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "88fcba71-7eeb-4780-88f3-3d751230eb2a" (UID: "88fcba71-7eeb-4780-88f3-3d751230eb2a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.388546 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-inventory" (OuterVolumeSpecName: "inventory") pod "88fcba71-7eeb-4780-88f3-3d751230eb2a" (UID: "88fcba71-7eeb-4780-88f3-3d751230eb2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.453708 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.453747 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fcba71-7eeb-4780-88f3-3d751230eb2a-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.453760 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q864l\" (UniqueName: \"kubernetes.io/projected/88fcba71-7eeb-4780-88f3-3d751230eb2a-kube-api-access-q864l\") on node \"crc\" DevicePath \"\"" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.705560 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" 
event={"ID":"88fcba71-7eeb-4780-88f3-3d751230eb2a","Type":"ContainerDied","Data":"7876b587aff7672e10905e41eced395302e9842fee982e68dc76a7051e2fa1ab"} Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.705873 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7876b587aff7672e10905e41eced395302e9842fee982e68dc76a7051e2fa1ab" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.705800 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.804800 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2"] Feb 28 04:07:17 crc kubenswrapper[4624]: E0228 04:07:17.805308 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fcba71-7eeb-4780-88f3-3d751230eb2a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.805335 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fcba71-7eeb-4780-88f3-3d751230eb2a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 04:07:17 crc kubenswrapper[4624]: E0228 04:07:17.805374 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486f976c-5601-4fa6-a076-f1c064661903" containerName="oc" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.805384 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="486f976c-5601-4fa6-a076-f1c064661903" containerName="oc" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.805658 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fcba71-7eeb-4780-88f3-3d751230eb2a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.805687 4624 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="486f976c-5601-4fa6-a076-f1c064661903" containerName="oc" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.806413 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.813942 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.814066 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.814115 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.822865 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.827801 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2"] Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.964489 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45hwl\" (UniqueName: \"kubernetes.io/projected/b801953f-c310-4623-ad3e-69dc84bc9a34-kube-api-access-45hwl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.964847 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:17 crc kubenswrapper[4624]: I0228 04:07:17.964988 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.066852 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.066986 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.067074 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45hwl\" (UniqueName: \"kubernetes.io/projected/b801953f-c310-4623-ad3e-69dc84bc9a34-kube-api-access-45hwl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.073587 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.074023 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.098933 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45hwl\" (UniqueName: \"kubernetes.io/projected/b801953f-c310-4623-ad3e-69dc84bc9a34-kube-api-access-45hwl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-92tp2\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.129949 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:18 crc kubenswrapper[4624]: I0228 04:07:18.708600 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2"] Feb 28 04:07:19 crc kubenswrapper[4624]: I0228 04:07:19.727201 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" event={"ID":"b801953f-c310-4623-ad3e-69dc84bc9a34","Type":"ContainerStarted","Data":"df52124bde3a03ee898b3ac9cf9a3c77592298db72d83a4647ee9474ef36e6c4"} Feb 28 04:07:19 crc kubenswrapper[4624]: I0228 04:07:19.729594 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" event={"ID":"b801953f-c310-4623-ad3e-69dc84bc9a34","Type":"ContainerStarted","Data":"3fd5652c5aceb66a0e1175e477c2c9662a314db06d7c68d2ba6692bcf89e4226"} Feb 28 04:07:19 crc kubenswrapper[4624]: I0228 04:07:19.750149 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" podStartSLOduration=2.23665315 podStartE2EDuration="2.750129572s" podCreationTimestamp="2026-02-28 04:07:17 +0000 UTC" firstStartedPulling="2026-02-28 04:07:18.722375995 +0000 UTC m=+1893.386415294" lastFinishedPulling="2026-02-28 04:07:19.235852407 +0000 UTC m=+1893.899891716" observedRunningTime="2026-02-28 04:07:19.747224913 +0000 UTC m=+1894.411264222" watchObservedRunningTime="2026-02-28 04:07:19.750129572 +0000 UTC m=+1894.414168881" Feb 28 04:07:24 crc kubenswrapper[4624]: I0228 04:07:24.777553 4624 generic.go:334] "Generic (PLEG): container finished" podID="b801953f-c310-4623-ad3e-69dc84bc9a34" containerID="df52124bde3a03ee898b3ac9cf9a3c77592298db72d83a4647ee9474ef36e6c4" exitCode=0 Feb 28 04:07:24 crc kubenswrapper[4624]: I0228 04:07:24.777659 4624 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" event={"ID":"b801953f-c310-4623-ad3e-69dc84bc9a34","Type":"ContainerDied","Data":"df52124bde3a03ee898b3ac9cf9a3c77592298db72d83a4647ee9474ef36e6c4"} Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.098621 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.294379 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.453821 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45hwl\" (UniqueName: \"kubernetes.io/projected/b801953f-c310-4623-ad3e-69dc84bc9a34-kube-api-access-45hwl\") pod \"b801953f-c310-4623-ad3e-69dc84bc9a34\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.453963 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-ssh-key-openstack-edpm-ipam\") pod \"b801953f-c310-4623-ad3e-69dc84bc9a34\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.454220 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-inventory\") pod \"b801953f-c310-4623-ad3e-69dc84bc9a34\" (UID: \"b801953f-c310-4623-ad3e-69dc84bc9a34\") " Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.463378 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b801953f-c310-4623-ad3e-69dc84bc9a34-kube-api-access-45hwl" (OuterVolumeSpecName: "kube-api-access-45hwl") pod 
"b801953f-c310-4623-ad3e-69dc84bc9a34" (UID: "b801953f-c310-4623-ad3e-69dc84bc9a34"). InnerVolumeSpecName "kube-api-access-45hwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.486990 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b801953f-c310-4623-ad3e-69dc84bc9a34" (UID: "b801953f-c310-4623-ad3e-69dc84bc9a34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.491034 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-inventory" (OuterVolumeSpecName: "inventory") pod "b801953f-c310-4623-ad3e-69dc84bc9a34" (UID: "b801953f-c310-4623-ad3e-69dc84bc9a34"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.557542 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.557595 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801953f-c310-4623-ad3e-69dc84bc9a34-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.557614 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45hwl\" (UniqueName: \"kubernetes.io/projected/b801953f-c310-4623-ad3e-69dc84bc9a34-kube-api-access-45hwl\") on node \"crc\" DevicePath \"\"" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.807162 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"5f66582844ab2a7a20d79ed088627297535eafe4e568aa67fdb624fc0d6a1269"} Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.809334 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" event={"ID":"b801953f-c310-4623-ad3e-69dc84bc9a34","Type":"ContainerDied","Data":"3fd5652c5aceb66a0e1175e477c2c9662a314db06d7c68d2ba6692bcf89e4226"} Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.809369 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd5652c5aceb66a0e1175e477c2c9662a314db06d7c68d2ba6692bcf89e4226" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.809372 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-92tp2" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.926145 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg"] Feb 28 04:07:26 crc kubenswrapper[4624]: E0228 04:07:26.926619 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b801953f-c310-4623-ad3e-69dc84bc9a34" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.926640 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b801953f-c310-4623-ad3e-69dc84bc9a34" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.926829 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b801953f-c310-4623-ad3e-69dc84bc9a34" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.927702 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.929811 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.932662 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.932793 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.953307 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:07:26 crc kubenswrapper[4624]: I0228 04:07:26.961287 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg"] Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.068264 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmqb\" (UniqueName: \"kubernetes.io/projected/9d24e266-6648-42ed-a44e-0b37c5e974a0-kube-api-access-4zmqb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.068415 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 
04:07:27.068709 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.171120 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.171271 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.171330 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmqb\" (UniqueName: \"kubernetes.io/projected/9d24e266-6648-42ed-a44e-0b37c5e974a0-kube-api-access-4zmqb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.184010 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.191607 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.195402 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmqb\" (UniqueName: \"kubernetes.io/projected/9d24e266-6648-42ed-a44e-0b37c5e974a0-kube-api-access-4zmqb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9bfg\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.250366 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" Feb 28 04:07:27 crc kubenswrapper[4624]: I0228 04:07:27.879396 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg"] Feb 28 04:07:28 crc kubenswrapper[4624]: I0228 04:07:28.835352 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" event={"ID":"9d24e266-6648-42ed-a44e-0b37c5e974a0","Type":"ContainerStarted","Data":"187a086b945cd1b190a3ba3928d9ab828a119c3a708047fcf99cddc37c76e3e7"} Feb 28 04:07:28 crc kubenswrapper[4624]: I0228 04:07:28.836297 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" event={"ID":"9d24e266-6648-42ed-a44e-0b37c5e974a0","Type":"ContainerStarted","Data":"b5e69d118fa6c75a9786e4894117246e32980f4ba8f9bb641b6fded34eb7e927"} Feb 28 04:07:28 crc kubenswrapper[4624]: I0228 04:07:28.875638 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" podStartSLOduration=2.469480525 podStartE2EDuration="2.875612226s" podCreationTimestamp="2026-02-28 04:07:26 +0000 UTC" firstStartedPulling="2026-02-28 04:07:27.889172036 +0000 UTC m=+1902.553211345" lastFinishedPulling="2026-02-28 04:07:28.295303737 +0000 UTC m=+1902.959343046" observedRunningTime="2026-02-28 04:07:28.859730507 +0000 UTC m=+1903.523769846" watchObservedRunningTime="2026-02-28 04:07:28.875612226 +0000 UTC m=+1903.539651535" Feb 28 04:07:40 crc kubenswrapper[4624]: I0228 04:07:40.065435 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-59ccl"] Feb 28 04:07:40 crc kubenswrapper[4624]: I0228 04:07:40.076576 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-59ccl"] Feb 28 04:07:40 crc kubenswrapper[4624]: I0228 
04:07:40.104620 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46fc9ec-81fe-446c-8592-9fcb0802aeb0" path="/var/lib/kubelet/pods/e46fc9ec-81fe-446c-8592-9fcb0802aeb0/volumes" Feb 28 04:07:44 crc kubenswrapper[4624]: I0228 04:07:44.036013 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxv6f"] Feb 28 04:07:44 crc kubenswrapper[4624]: I0228 04:07:44.046592 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxv6f"] Feb 28 04:07:44 crc kubenswrapper[4624]: I0228 04:07:44.100073 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedb25a9-d021-48ae-82e9-cbf0dcba172f" path="/var/lib/kubelet/pods/dedb25a9-d021-48ae-82e9-cbf0dcba172f/volumes" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.153232 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537528-qrvbh"] Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.160257 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.165978 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.166275 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.168842 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.195077 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537528-qrvbh"] Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.319806 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qvk\" (UniqueName: \"kubernetes.io/projected/3336f912-c3b2-4483-856f-f93def7322ed-kube-api-access-t8qvk\") pod \"auto-csr-approver-29537528-qrvbh\" (UID: \"3336f912-c3b2-4483-856f-f93def7322ed\") " pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.422402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qvk\" (UniqueName: \"kubernetes.io/projected/3336f912-c3b2-4483-856f-f93def7322ed-kube-api-access-t8qvk\") pod \"auto-csr-approver-29537528-qrvbh\" (UID: \"3336f912-c3b2-4483-856f-f93def7322ed\") " pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.464670 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qvk\" (UniqueName: \"kubernetes.io/projected/3336f912-c3b2-4483-856f-f93def7322ed-kube-api-access-t8qvk\") pod \"auto-csr-approver-29537528-qrvbh\" (UID: \"3336f912-c3b2-4483-856f-f93def7322ed\") " 
pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.493111 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:00 crc kubenswrapper[4624]: I0228 04:08:00.800785 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537528-qrvbh"] Feb 28 04:08:01 crc kubenswrapper[4624]: I0228 04:08:01.003337 4624 scope.go:117] "RemoveContainer" containerID="5b3667bfebc2bb818b14ec3562a28b9829d94d6e4c9b9a51edb2d5c8a41b7538" Feb 28 04:08:01 crc kubenswrapper[4624]: I0228 04:08:01.047170 4624 scope.go:117] "RemoveContainer" containerID="04e0cdc7b27ada011157ecf57a8735e5c646c9fc5b2df62b897977b4f85e9c38" Feb 28 04:08:01 crc kubenswrapper[4624]: I0228 04:08:01.129680 4624 scope.go:117] "RemoveContainer" containerID="01da1f38ecf1123e77721a9acf0f2037630441a4a32b26ee600456fab409c75e" Feb 28 04:08:01 crc kubenswrapper[4624]: I0228 04:08:01.154447 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" event={"ID":"3336f912-c3b2-4483-856f-f93def7322ed","Type":"ContainerStarted","Data":"1150100561f6726758246f25b35106ff75896f74dbbcbb17d3f7445b10af8a9e"} Feb 28 04:08:02 crc kubenswrapper[4624]: I0228 04:08:02.169538 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" event={"ID":"3336f912-c3b2-4483-856f-f93def7322ed","Type":"ContainerStarted","Data":"16b37111f35e9a75c1492aeba62a53888af79806814f30fcb2f44a4607d3c1be"} Feb 28 04:08:02 crc kubenswrapper[4624]: I0228 04:08:02.194316 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" podStartSLOduration=1.2429824250000001 podStartE2EDuration="2.194288765s" podCreationTimestamp="2026-02-28 04:08:00 +0000 UTC" firstStartedPulling="2026-02-28 04:08:00.811320284 +0000 UTC 
m=+1935.475359583" lastFinishedPulling="2026-02-28 04:08:01.762626614 +0000 UTC m=+1936.426665923" observedRunningTime="2026-02-28 04:08:02.189095204 +0000 UTC m=+1936.853134513" watchObservedRunningTime="2026-02-28 04:08:02.194288765 +0000 UTC m=+1936.858328074" Feb 28 04:08:03 crc kubenswrapper[4624]: I0228 04:08:03.181962 4624 generic.go:334] "Generic (PLEG): container finished" podID="3336f912-c3b2-4483-856f-f93def7322ed" containerID="16b37111f35e9a75c1492aeba62a53888af79806814f30fcb2f44a4607d3c1be" exitCode=0 Feb 28 04:08:03 crc kubenswrapper[4624]: I0228 04:08:03.182043 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" event={"ID":"3336f912-c3b2-4483-856f-f93def7322ed","Type":"ContainerDied","Data":"16b37111f35e9a75c1492aeba62a53888af79806814f30fcb2f44a4607d3c1be"} Feb 28 04:08:04 crc kubenswrapper[4624]: I0228 04:08:04.560969 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:04 crc kubenswrapper[4624]: I0228 04:08:04.722394 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8qvk\" (UniqueName: \"kubernetes.io/projected/3336f912-c3b2-4483-856f-f93def7322ed-kube-api-access-t8qvk\") pod \"3336f912-c3b2-4483-856f-f93def7322ed\" (UID: \"3336f912-c3b2-4483-856f-f93def7322ed\") " Feb 28 04:08:04 crc kubenswrapper[4624]: I0228 04:08:04.729908 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3336f912-c3b2-4483-856f-f93def7322ed-kube-api-access-t8qvk" (OuterVolumeSpecName: "kube-api-access-t8qvk") pod "3336f912-c3b2-4483-856f-f93def7322ed" (UID: "3336f912-c3b2-4483-856f-f93def7322ed"). InnerVolumeSpecName "kube-api-access-t8qvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:08:04 crc kubenswrapper[4624]: I0228 04:08:04.825063 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8qvk\" (UniqueName: \"kubernetes.io/projected/3336f912-c3b2-4483-856f-f93def7322ed-kube-api-access-t8qvk\") on node \"crc\" DevicePath \"\"" Feb 28 04:08:05 crc kubenswrapper[4624]: I0228 04:08:05.200376 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" event={"ID":"3336f912-c3b2-4483-856f-f93def7322ed","Type":"ContainerDied","Data":"1150100561f6726758246f25b35106ff75896f74dbbcbb17d3f7445b10af8a9e"} Feb 28 04:08:05 crc kubenswrapper[4624]: I0228 04:08:05.200431 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1150100561f6726758246f25b35106ff75896f74dbbcbb17d3f7445b10af8a9e" Feb 28 04:08:05 crc kubenswrapper[4624]: I0228 04:08:05.200456 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537528-qrvbh" Feb 28 04:08:05 crc kubenswrapper[4624]: I0228 04:08:05.279511 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537522-zcrkb"] Feb 28 04:08:05 crc kubenswrapper[4624]: I0228 04:08:05.289328 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537522-zcrkb"] Feb 28 04:08:06 crc kubenswrapper[4624]: I0228 04:08:06.105623 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b5e045-6018-480a-9615-d545e8d4921d" path="/var/lib/kubelet/pods/41b5e045-6018-480a-9615-d545e8d4921d/volumes" Feb 28 04:08:10 crc kubenswrapper[4624]: I0228 04:08:10.252869 4624 generic.go:334] "Generic (PLEG): container finished" podID="9d24e266-6648-42ed-a44e-0b37c5e974a0" containerID="187a086b945cd1b190a3ba3928d9ab828a119c3a708047fcf99cddc37c76e3e7" exitCode=0 Feb 28 04:08:10 crc kubenswrapper[4624]: I0228 04:08:10.252977 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" event={"ID":"9d24e266-6648-42ed-a44e-0b37c5e974a0","Type":"ContainerDied","Data":"187a086b945cd1b190a3ba3928d9ab828a119c3a708047fcf99cddc37c76e3e7"}
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.751686 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg"
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.886326 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zmqb\" (UniqueName: \"kubernetes.io/projected/9d24e266-6648-42ed-a44e-0b37c5e974a0-kube-api-access-4zmqb\") pod \"9d24e266-6648-42ed-a44e-0b37c5e974a0\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") "
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.886489 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-inventory\") pod \"9d24e266-6648-42ed-a44e-0b37c5e974a0\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") "
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.886663 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-ssh-key-openstack-edpm-ipam\") pod \"9d24e266-6648-42ed-a44e-0b37c5e974a0\" (UID: \"9d24e266-6648-42ed-a44e-0b37c5e974a0\") "
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.902670 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d24e266-6648-42ed-a44e-0b37c5e974a0-kube-api-access-4zmqb" (OuterVolumeSpecName: "kube-api-access-4zmqb") pod "9d24e266-6648-42ed-a44e-0b37c5e974a0" (UID: "9d24e266-6648-42ed-a44e-0b37c5e974a0"). InnerVolumeSpecName "kube-api-access-4zmqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.929103 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d24e266-6648-42ed-a44e-0b37c5e974a0" (UID: "9d24e266-6648-42ed-a44e-0b37c5e974a0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.941596 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-inventory" (OuterVolumeSpecName: "inventory") pod "9d24e266-6648-42ed-a44e-0b37c5e974a0" (UID: "9d24e266-6648-42ed-a44e-0b37c5e974a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.989713 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zmqb\" (UniqueName: \"kubernetes.io/projected/9d24e266-6648-42ed-a44e-0b37c5e974a0-kube-api-access-4zmqb\") on node \"crc\" DevicePath \"\""
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.989764 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-inventory\") on node \"crc\" DevicePath \"\""
Feb 28 04:08:11 crc kubenswrapper[4624]: I0228 04:08:11.989776 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d24e266-6648-42ed-a44e-0b37c5e974a0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.276297 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg" event={"ID":"9d24e266-6648-42ed-a44e-0b37c5e974a0","Type":"ContainerDied","Data":"b5e69d118fa6c75a9786e4894117246e32980f4ba8f9bb641b6fded34eb7e927"}
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.276354 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e69d118fa6c75a9786e4894117246e32980f4ba8f9bb641b6fded34eb7e927"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.276376 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9bfg"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.361268 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"]
Feb 28 04:08:12 crc kubenswrapper[4624]: E0228 04:08:12.361740 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3336f912-c3b2-4483-856f-f93def7322ed" containerName="oc"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.361760 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3336f912-c3b2-4483-856f-f93def7322ed" containerName="oc"
Feb 28 04:08:12 crc kubenswrapper[4624]: E0228 04:08:12.361778 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d24e266-6648-42ed-a44e-0b37c5e974a0" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.361787 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d24e266-6648-42ed-a44e-0b37c5e974a0" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.362025 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3336f912-c3b2-4483-856f-f93def7322ed" containerName="oc"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.362058 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d24e266-6648-42ed-a44e-0b37c5e974a0" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.362957 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.365026 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.365925 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.366287 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.366310 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.397988 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"]
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.502165 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.502314 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.502598 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468hb\" (UniqueName: \"kubernetes.io/projected/a2c9b638-8f30-49a3-a818-05bc76a99b30-kube-api-access-468hb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.605553 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.605946 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.606128 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468hb\" (UniqueName: \"kubernetes.io/projected/a2c9b638-8f30-49a3-a818-05bc76a99b30-kube-api-access-468hb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.613068 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.613208 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.630337 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468hb\" (UniqueName: \"kubernetes.io/projected/a2c9b638-8f30-49a3-a818-05bc76a99b30-kube-api-access-468hb\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-gghhz\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:12 crc kubenswrapper[4624]: I0228 04:08:12.683914 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:08:13 crc kubenswrapper[4624]: I0228 04:08:13.339298 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"]
Feb 28 04:08:14 crc kubenswrapper[4624]: I0228 04:08:14.297034 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz" event={"ID":"a2c9b638-8f30-49a3-a818-05bc76a99b30","Type":"ContainerStarted","Data":"8dbb9e9727ea45a0b9cb5518c6b500c2b96dc6290c9145eca6602bea82c558fd"}
Feb 28 04:08:14 crc kubenswrapper[4624]: I0228 04:08:14.297841 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz" event={"ID":"a2c9b638-8f30-49a3-a818-05bc76a99b30","Type":"ContainerStarted","Data":"8fb3f2c085630a09ca6b474d2d5a76b4d1f9bf6e884911629c086fff4db65143"}
Feb 28 04:08:14 crc kubenswrapper[4624]: I0228 04:08:14.330923 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz" podStartSLOduration=1.901244334 podStartE2EDuration="2.330903111s" podCreationTimestamp="2026-02-28 04:08:12 +0000 UTC" firstStartedPulling="2026-02-28 04:08:13.35000975 +0000 UTC m=+1948.014049059" lastFinishedPulling="2026-02-28 04:08:13.779668527 +0000 UTC m=+1948.443707836" observedRunningTime="2026-02-28 04:08:14.330647214 +0000 UTC m=+1948.994686523" watchObservedRunningTime="2026-02-28 04:08:14.330903111 +0000 UTC m=+1948.994942420"
Feb 28 04:08:18 crc kubenswrapper[4624]: I0228 04:08:18.041001 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vwc78"]
Feb 28 04:08:18 crc kubenswrapper[4624]: I0228 04:08:18.059105 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vwc78"]
Feb 28 04:08:18 crc kubenswrapper[4624]: I0228 04:08:18.101060 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1bfb27-f028-4b0c-a657-a8435c7fcf72" path="/var/lib/kubelet/pods/cd1bfb27-f028-4b0c-a657-a8435c7fcf72/volumes"
Feb 28 04:08:26 crc kubenswrapper[4624]: I0228 04:08:26.879803 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dkzfs"]
Feb 28 04:08:26 crc kubenswrapper[4624]: I0228 04:08:26.885182 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:26 crc kubenswrapper[4624]: I0228 04:08:26.902413 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkzfs"]
Feb 28 04:08:26 crc kubenswrapper[4624]: I0228 04:08:26.931837 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxk4\" (UniqueName: \"kubernetes.io/projected/42415874-0ea9-4809-90f0-f01f6688b7fb-kube-api-access-tgxk4\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:26 crc kubenswrapper[4624]: I0228 04:08:26.932160 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-catalog-content\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:26 crc kubenswrapper[4624]: I0228 04:08:26.932236 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-utilities\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.034353 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-catalog-content\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.034427 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-utilities\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.034470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxk4\" (UniqueName: \"kubernetes.io/projected/42415874-0ea9-4809-90f0-f01f6688b7fb-kube-api-access-tgxk4\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.035005 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-catalog-content\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.035772 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-utilities\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.062060 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxk4\" (UniqueName: \"kubernetes.io/projected/42415874-0ea9-4809-90f0-f01f6688b7fb-kube-api-access-tgxk4\") pod \"community-operators-dkzfs\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") " pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.273194 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:27 crc kubenswrapper[4624]: I0228 04:08:27.819690 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkzfs"]
Feb 28 04:08:27 crc kubenswrapper[4624]: W0228 04:08:27.828138 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42415874_0ea9_4809_90f0_f01f6688b7fb.slice/crio-7e3d139ab9ea077957ca2ea9ff282b6a7e616f7aa9366a41e6abd018ceb841a1 WatchSource:0}: Error finding container 7e3d139ab9ea077957ca2ea9ff282b6a7e616f7aa9366a41e6abd018ceb841a1: Status 404 returned error can't find the container with id 7e3d139ab9ea077957ca2ea9ff282b6a7e616f7aa9366a41e6abd018ceb841a1
Feb 28 04:08:28 crc kubenswrapper[4624]: I0228 04:08:28.463412 4624 generic.go:334] "Generic (PLEG): container finished" podID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerID="eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1" exitCode=0
Feb 28 04:08:28 crc kubenswrapper[4624]: I0228 04:08:28.463733 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerDied","Data":"eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1"}
Feb 28 04:08:28 crc kubenswrapper[4624]: I0228 04:08:28.463768 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerStarted","Data":"7e3d139ab9ea077957ca2ea9ff282b6a7e616f7aa9366a41e6abd018ceb841a1"}
Feb 28 04:08:29 crc kubenswrapper[4624]: I0228 04:08:29.476649 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerStarted","Data":"e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6"}
Feb 28 04:08:31 crc kubenswrapper[4624]: I0228 04:08:31.496811 4624 generic.go:334] "Generic (PLEG): container finished" podID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerID="e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6" exitCode=0
Feb 28 04:08:31 crc kubenswrapper[4624]: I0228 04:08:31.496878 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerDied","Data":"e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6"}
Feb 28 04:08:32 crc kubenswrapper[4624]: I0228 04:08:32.510047 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerStarted","Data":"541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be"}
Feb 28 04:08:32 crc kubenswrapper[4624]: I0228 04:08:32.539581 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dkzfs" podStartSLOduration=3.086887416 podStartE2EDuration="6.539559045s" podCreationTimestamp="2026-02-28 04:08:26 +0000 UTC" firstStartedPulling="2026-02-28 04:08:28.46644572 +0000 UTC m=+1963.130485029" lastFinishedPulling="2026-02-28 04:08:31.919117349 +0000 UTC m=+1966.583156658" observedRunningTime="2026-02-28 04:08:32.539512223 +0000 UTC m=+1967.203551532" watchObservedRunningTime="2026-02-28 04:08:32.539559045 +0000 UTC m=+1967.203598344"
Feb 28 04:08:37 crc kubenswrapper[4624]: I0228 04:08:37.274161 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:37 crc kubenswrapper[4624]: I0228 04:08:37.274949 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:38 crc kubenswrapper[4624]: I0228 04:08:38.342509 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dkzfs" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="registry-server" probeResult="failure" output=<
Feb 28 04:08:38 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s
Feb 28 04:08:38 crc kubenswrapper[4624]: >
Feb 28 04:08:47 crc kubenswrapper[4624]: I0228 04:08:47.329834 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:47 crc kubenswrapper[4624]: I0228 04:08:47.395047 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:47 crc kubenswrapper[4624]: I0228 04:08:47.584850 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkzfs"]
Feb 28 04:08:48 crc kubenswrapper[4624]: I0228 04:08:48.711301 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dkzfs" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="registry-server" containerID="cri-o://541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be" gracePeriod=2
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.312452 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.420049 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-catalog-content\") pod \"42415874-0ea9-4809-90f0-f01f6688b7fb\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") "
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.420451 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgxk4\" (UniqueName: \"kubernetes.io/projected/42415874-0ea9-4809-90f0-f01f6688b7fb-kube-api-access-tgxk4\") pod \"42415874-0ea9-4809-90f0-f01f6688b7fb\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") "
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.420513 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-utilities\") pod \"42415874-0ea9-4809-90f0-f01f6688b7fb\" (UID: \"42415874-0ea9-4809-90f0-f01f6688b7fb\") "
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.421841 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-utilities" (OuterVolumeSpecName: "utilities") pod "42415874-0ea9-4809-90f0-f01f6688b7fb" (UID: "42415874-0ea9-4809-90f0-f01f6688b7fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.440350 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42415874-0ea9-4809-90f0-f01f6688b7fb-kube-api-access-tgxk4" (OuterVolumeSpecName: "kube-api-access-tgxk4") pod "42415874-0ea9-4809-90f0-f01f6688b7fb" (UID: "42415874-0ea9-4809-90f0-f01f6688b7fb"). InnerVolumeSpecName "kube-api-access-tgxk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.484485 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42415874-0ea9-4809-90f0-f01f6688b7fb" (UID: "42415874-0ea9-4809-90f0-f01f6688b7fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.524253 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.524299 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42415874-0ea9-4809-90f0-f01f6688b7fb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.524315 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgxk4\" (UniqueName: \"kubernetes.io/projected/42415874-0ea9-4809-90f0-f01f6688b7fb-kube-api-access-tgxk4\") on node \"crc\" DevicePath \"\""
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.725210 4624 generic.go:334] "Generic (PLEG): container finished" podID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerID="541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be" exitCode=0
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.725273 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerDied","Data":"541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be"}
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.725304 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkzfs" event={"ID":"42415874-0ea9-4809-90f0-f01f6688b7fb","Type":"ContainerDied","Data":"7e3d139ab9ea077957ca2ea9ff282b6a7e616f7aa9366a41e6abd018ceb841a1"}
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.725323 4624 scope.go:117] "RemoveContainer" containerID="541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.725491 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkzfs"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.771544 4624 scope.go:117] "RemoveContainer" containerID="e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.772992 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkzfs"]
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.783620 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dkzfs"]
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.806033 4624 scope.go:117] "RemoveContainer" containerID="eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.851999 4624 scope.go:117] "RemoveContainer" containerID="541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be"
Feb 28 04:08:49 crc kubenswrapper[4624]: E0228 04:08:49.853584 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be\": container with ID starting with 541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be not found: ID does not exist" containerID="541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.853642 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be"} err="failed to get container status \"541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be\": rpc error: code = NotFound desc = could not find container \"541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be\": container with ID starting with 541f5fcbf848789cf1bf922a432b9d4d395a9ef4080b3a18ae9e5da8f6a7b3be not found: ID does not exist"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.853671 4624 scope.go:117] "RemoveContainer" containerID="e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6"
Feb 28 04:08:49 crc kubenswrapper[4624]: E0228 04:08:49.853969 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6\": container with ID starting with e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6 not found: ID does not exist" containerID="e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.854028 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6"} err="failed to get container status \"e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6\": rpc error: code = NotFound desc = could not find container \"e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6\": container with ID starting with e873c0c5aeffebde6ce5da77217743b6c6d25103a6e9cebea07e3c58c58276d6 not found: ID does not exist"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.854050 4624 scope.go:117] "RemoveContainer" containerID="eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1"
Feb 28 04:08:49 crc kubenswrapper[4624]: E0228 04:08:49.854392 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1\": container with ID starting with eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1 not found: ID does not exist" containerID="eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1"
Feb 28 04:08:49 crc kubenswrapper[4624]: I0228 04:08:49.854431 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1"} err="failed to get container status \"eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1\": rpc error: code = NotFound desc = could not find container \"eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1\": container with ID starting with eadee0da27c1ee6968f8383aa7c88909daf860f801e8077e9159fec49d062ec1 not found: ID does not exist"
Feb 28 04:08:50 crc kubenswrapper[4624]: I0228 04:08:50.101365 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" path="/var/lib/kubelet/pods/42415874-0ea9-4809-90f0-f01f6688b7fb/volumes"
Feb 28 04:09:01 crc kubenswrapper[4624]: I0228 04:09:01.276544 4624 scope.go:117] "RemoveContainer" containerID="65234ee8a5d9538ddd5db7eb9a455c8ea2e2a70eacfb25f2572f7972ef74a39b"
Feb 28 04:09:01 crc kubenswrapper[4624]: I0228 04:09:01.322505 4624 scope.go:117] "RemoveContainer" containerID="dcf59992b2c0e0b20cc344713b3563ab31f403ea59534ae3c75f1a6041d139ef"
Feb 28 04:09:09 crc kubenswrapper[4624]: I0228 04:09:09.918449 4624 generic.go:334] "Generic (PLEG): container finished" podID="a2c9b638-8f30-49a3-a818-05bc76a99b30" containerID="8dbb9e9727ea45a0b9cb5518c6b500c2b96dc6290c9145eca6602bea82c558fd" exitCode=0
Feb 28 04:09:09 crc kubenswrapper[4624]: I0228 04:09:09.918527 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz" event={"ID":"a2c9b638-8f30-49a3-a818-05bc76a99b30","Type":"ContainerDied","Data":"8dbb9e9727ea45a0b9cb5518c6b500c2b96dc6290c9145eca6602bea82c558fd"}
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.467035 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.486761 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-ssh-key-openstack-edpm-ipam\") pod \"a2c9b638-8f30-49a3-a818-05bc76a99b30\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") "
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.486980 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-468hb\" (UniqueName: \"kubernetes.io/projected/a2c9b638-8f30-49a3-a818-05bc76a99b30-kube-api-access-468hb\") pod \"a2c9b638-8f30-49a3-a818-05bc76a99b30\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") "
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.487031 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-inventory\") pod \"a2c9b638-8f30-49a3-a818-05bc76a99b30\" (UID: \"a2c9b638-8f30-49a3-a818-05bc76a99b30\") "
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.515141 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c9b638-8f30-49a3-a818-05bc76a99b30-kube-api-access-468hb" (OuterVolumeSpecName: "kube-api-access-468hb") pod "a2c9b638-8f30-49a3-a818-05bc76a99b30" (UID: "a2c9b638-8f30-49a3-a818-05bc76a99b30"). InnerVolumeSpecName "kube-api-access-468hb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.522228 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2c9b638-8f30-49a3-a818-05bc76a99b30" (UID: "a2c9b638-8f30-49a3-a818-05bc76a99b30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.524271 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-inventory" (OuterVolumeSpecName: "inventory") pod "a2c9b638-8f30-49a3-a818-05bc76a99b30" (UID: "a2c9b638-8f30-49a3-a818-05bc76a99b30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.590372 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.590442 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-468hb\" (UniqueName: \"kubernetes.io/projected/a2c9b638-8f30-49a3-a818-05bc76a99b30-kube-api-access-468hb\") on node \"crc\" DevicePath \"\""
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.590457 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2c9b638-8f30-49a3-a818-05bc76a99b30-inventory\") on node \"crc\" DevicePath \"\""
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.941857 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz" event={"ID":"a2c9b638-8f30-49a3-a818-05bc76a99b30","Type":"ContainerDied","Data":"8fb3f2c085630a09ca6b474d2d5a76b4d1f9bf6e884911629c086fff4db65143"}
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.942204 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb3f2c085630a09ca6b474d2d5a76b4d1f9bf6e884911629c086fff4db65143"
Feb 28 04:09:11 crc kubenswrapper[4624]: I0228 04:09:11.941943 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-gghhz"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.103519 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7qc65"]
Feb 28 04:09:12 crc kubenswrapper[4624]: E0228 04:09:12.104056 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="registry-server"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.104096 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="registry-server"
Feb 28 04:09:12 crc kubenswrapper[4624]: E0228 04:09:12.104112 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="extract-utilities"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.104120 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="extract-utilities"
Feb 28 04:09:12 crc kubenswrapper[4624]: E0228 04:09:12.104128 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="extract-content"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.104135 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="extract-content"
Feb 28 04:09:12 crc kubenswrapper[4624]: E0228 04:09:12.104149 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c9b638-8f30-49a3-a818-05bc76a99b30" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.104157 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c9b638-8f30-49a3-a818-05bc76a99b30" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.104356 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="42415874-0ea9-4809-90f0-f01f6688b7fb" containerName="registry-server"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.104379 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c9b638-8f30-49a3-a818-05bc76a99b30" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.105173 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.107551 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.107994 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.109920 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.119337 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n"
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.150374 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7qc65"]
Feb 28 04:09:12 crc kubenswrapper[4624]: I0228
04:09:12.203487 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.203592 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhwzm\" (UniqueName: \"kubernetes.io/projected/077546b4-fddd-40c3-866a-714afa3a4f2f-kube-api-access-hhwzm\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.203733 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.305905 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.306066 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhwzm\" (UniqueName: \"kubernetes.io/projected/077546b4-fddd-40c3-866a-714afa3a4f2f-kube-api-access-hhwzm\") pod 
\"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.306211 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.322408 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.323665 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.331355 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhwzm\" (UniqueName: \"kubernetes.io/projected/077546b4-fddd-40c3-866a-714afa3a4f2f-kube-api-access-hhwzm\") pod \"ssh-known-hosts-edpm-deployment-7qc65\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:12 crc kubenswrapper[4624]: I0228 04:09:12.434799 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:13 crc kubenswrapper[4624]: I0228 04:09:13.325186 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7qc65"] Feb 28 04:09:13 crc kubenswrapper[4624]: I0228 04:09:13.960598 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" event={"ID":"077546b4-fddd-40c3-866a-714afa3a4f2f","Type":"ContainerStarted","Data":"6cb8d6e0f444ed440a5a639d0ad8a545abf6d6158c93dc586f3f8fc8dc254323"} Feb 28 04:09:14 crc kubenswrapper[4624]: I0228 04:09:14.974874 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" event={"ID":"077546b4-fddd-40c3-866a-714afa3a4f2f","Type":"ContainerStarted","Data":"aa9d980a6555e5fabd1f4123bf35d574eff7fb948de84441f0b6449c64680f57"} Feb 28 04:09:15 crc kubenswrapper[4624]: I0228 04:09:15.020836 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" podStartSLOduration=2.597759396 podStartE2EDuration="3.020789711s" podCreationTimestamp="2026-02-28 04:09:12 +0000 UTC" firstStartedPulling="2026-02-28 04:09:13.360244827 +0000 UTC m=+2008.024284136" lastFinishedPulling="2026-02-28 04:09:13.783275142 +0000 UTC m=+2008.447314451" observedRunningTime="2026-02-28 04:09:15.006537755 +0000 UTC m=+2009.670577064" watchObservedRunningTime="2026-02-28 04:09:15.020789711 +0000 UTC m=+2009.684829020" Feb 28 04:09:21 crc kubenswrapper[4624]: I0228 04:09:21.048790 4624 generic.go:334] "Generic (PLEG): container finished" podID="077546b4-fddd-40c3-866a-714afa3a4f2f" containerID="aa9d980a6555e5fabd1f4123bf35d574eff7fb948de84441f0b6449c64680f57" exitCode=0 Feb 28 04:09:21 crc kubenswrapper[4624]: I0228 04:09:21.048919 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" 
event={"ID":"077546b4-fddd-40c3-866a-714afa3a4f2f","Type":"ContainerDied","Data":"aa9d980a6555e5fabd1f4123bf35d574eff7fb948de84441f0b6449c64680f57"} Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.535297 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.541096 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhwzm\" (UniqueName: \"kubernetes.io/projected/077546b4-fddd-40c3-866a-714afa3a4f2f-kube-api-access-hhwzm\") pod \"077546b4-fddd-40c3-866a-714afa3a4f2f\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.541186 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-ssh-key-openstack-edpm-ipam\") pod \"077546b4-fddd-40c3-866a-714afa3a4f2f\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.541230 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-inventory-0\") pod \"077546b4-fddd-40c3-866a-714afa3a4f2f\" (UID: \"077546b4-fddd-40c3-866a-714afa3a4f2f\") " Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.563640 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077546b4-fddd-40c3-866a-714afa3a4f2f-kube-api-access-hhwzm" (OuterVolumeSpecName: "kube-api-access-hhwzm") pod "077546b4-fddd-40c3-866a-714afa3a4f2f" (UID: "077546b4-fddd-40c3-866a-714afa3a4f2f"). InnerVolumeSpecName "kube-api-access-hhwzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.609980 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "077546b4-fddd-40c3-866a-714afa3a4f2f" (UID: "077546b4-fddd-40c3-866a-714afa3a4f2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.611014 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "077546b4-fddd-40c3-866a-714afa3a4f2f" (UID: "077546b4-fddd-40c3-866a-714afa3a4f2f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.644272 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhwzm\" (UniqueName: \"kubernetes.io/projected/077546b4-fddd-40c3-866a-714afa3a4f2f-kube-api-access-hhwzm\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.644320 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:22 crc kubenswrapper[4624]: I0228 04:09:22.644333 4624 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/077546b4-fddd-40c3-866a-714afa3a4f2f-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.072865 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" 
event={"ID":"077546b4-fddd-40c3-866a-714afa3a4f2f","Type":"ContainerDied","Data":"6cb8d6e0f444ed440a5a639d0ad8a545abf6d6158c93dc586f3f8fc8dc254323"} Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.072937 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb8d6e0f444ed440a5a639d0ad8a545abf6d6158c93dc586f3f8fc8dc254323" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.073044 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7qc65" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.178111 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh"] Feb 28 04:09:23 crc kubenswrapper[4624]: E0228 04:09:23.178527 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077546b4-fddd-40c3-866a-714afa3a4f2f" containerName="ssh-known-hosts-edpm-deployment" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.178545 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="077546b4-fddd-40c3-866a-714afa3a4f2f" containerName="ssh-known-hosts-edpm-deployment" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.178741 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="077546b4-fddd-40c3-866a-714afa3a4f2f" containerName="ssh-known-hosts-edpm-deployment" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.179450 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.184618 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.184724 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.184878 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.184967 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.202122 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh"] Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.259419 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.259505 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.259557 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhb8\" (UniqueName: \"kubernetes.io/projected/7c73aa0b-4045-4181-849d-8e7a631cdb87-kube-api-access-snhb8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.361981 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhb8\" (UniqueName: \"kubernetes.io/projected/7c73aa0b-4045-4181-849d-8e7a631cdb87-kube-api-access-snhb8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.362263 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.362388 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.368285 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.368287 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.391314 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snhb8\" (UniqueName: \"kubernetes.io/projected/7c73aa0b-4045-4181-849d-8e7a631cdb87-kube-api-access-snhb8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nqpfh\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.499041 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:23 crc kubenswrapper[4624]: I0228 04:09:23.930570 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh"] Feb 28 04:09:24 crc kubenswrapper[4624]: I0228 04:09:24.099246 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" event={"ID":"7c73aa0b-4045-4181-849d-8e7a631cdb87","Type":"ContainerStarted","Data":"1c2031dadbc9fbba970000d37b6e71155afc3343f99f12ee96da95beffc4de5d"} Feb 28 04:09:25 crc kubenswrapper[4624]: I0228 04:09:25.131603 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" event={"ID":"7c73aa0b-4045-4181-849d-8e7a631cdb87","Type":"ContainerStarted","Data":"438181c4c4852a4dbf84bb6a7c4c1464f527e8d801f670097f5335f53f46048d"} Feb 28 04:09:25 crc kubenswrapper[4624]: I0228 04:09:25.151429 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" podStartSLOduration=1.7476076090000001 podStartE2EDuration="2.151406402s" podCreationTimestamp="2026-02-28 04:09:23 +0000 UTC" firstStartedPulling="2026-02-28 04:09:23.947181095 +0000 UTC m=+2018.611220404" lastFinishedPulling="2026-02-28 04:09:24.350979888 +0000 UTC m=+2019.015019197" observedRunningTime="2026-02-28 04:09:25.148105382 +0000 UTC m=+2019.812144691" watchObservedRunningTime="2026-02-28 04:09:25.151406402 +0000 UTC m=+2019.815445711" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.107906 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qtdsq"] Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.110460 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.129742 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtdsq"] Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.216376 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-catalog-content\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.218156 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85bg\" (UniqueName: \"kubernetes.io/projected/83fccd4a-5eb5-4443-8449-d40608265c9d-kube-api-access-p85bg\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.218316 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-utilities\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.319884 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p85bg\" (UniqueName: \"kubernetes.io/projected/83fccd4a-5eb5-4443-8449-d40608265c9d-kube-api-access-p85bg\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.319994 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-utilities\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.320059 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-catalog-content\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.320749 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-catalog-content\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.321312 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-utilities\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.342144 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p85bg\" (UniqueName: \"kubernetes.io/projected/83fccd4a-5eb5-4443-8449-d40608265c9d-kube-api-access-p85bg\") pod \"redhat-operators-qtdsq\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") " pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.436492 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:28 crc kubenswrapper[4624]: I0228 04:09:28.966069 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtdsq"] Feb 28 04:09:28 crc kubenswrapper[4624]: W0228 04:09:28.967583 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83fccd4a_5eb5_4443_8449_d40608265c9d.slice/crio-af0f73d9e69efaeaf38dcfd10bf5bbc68d750607001c2fb5b793b7416e02c303 WatchSource:0}: Error finding container af0f73d9e69efaeaf38dcfd10bf5bbc68d750607001c2fb5b793b7416e02c303: Status 404 returned error can't find the container with id af0f73d9e69efaeaf38dcfd10bf5bbc68d750607001c2fb5b793b7416e02c303 Feb 28 04:09:29 crc kubenswrapper[4624]: I0228 04:09:29.181892 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerStarted","Data":"af0f73d9e69efaeaf38dcfd10bf5bbc68d750607001c2fb5b793b7416e02c303"} Feb 28 04:09:30 crc kubenswrapper[4624]: I0228 04:09:30.194773 4624 generic.go:334] "Generic (PLEG): container finished" podID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerID="d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf" exitCode=0 Feb 28 04:09:30 crc kubenswrapper[4624]: I0228 04:09:30.194867 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerDied","Data":"d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf"} Feb 28 04:09:31 crc kubenswrapper[4624]: I0228 04:09:31.210074 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" 
event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerStarted","Data":"abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61"} Feb 28 04:09:33 crc kubenswrapper[4624]: I0228 04:09:33.232805 4624 generic.go:334] "Generic (PLEG): container finished" podID="7c73aa0b-4045-4181-849d-8e7a631cdb87" containerID="438181c4c4852a4dbf84bb6a7c4c1464f527e8d801f670097f5335f53f46048d" exitCode=0 Feb 28 04:09:33 crc kubenswrapper[4624]: I0228 04:09:33.232884 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" event={"ID":"7c73aa0b-4045-4181-849d-8e7a631cdb87","Type":"ContainerDied","Data":"438181c4c4852a4dbf84bb6a7c4c1464f527e8d801f670097f5335f53f46048d"} Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.697164 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.885149 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-ssh-key-openstack-edpm-ipam\") pod \"7c73aa0b-4045-4181-849d-8e7a631cdb87\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.885376 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snhb8\" (UniqueName: \"kubernetes.io/projected/7c73aa0b-4045-4181-849d-8e7a631cdb87-kube-api-access-snhb8\") pod \"7c73aa0b-4045-4181-849d-8e7a631cdb87\" (UID: \"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.885508 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-inventory\") pod \"7c73aa0b-4045-4181-849d-8e7a631cdb87\" (UID: 
\"7c73aa0b-4045-4181-849d-8e7a631cdb87\") " Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.894918 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c73aa0b-4045-4181-849d-8e7a631cdb87-kube-api-access-snhb8" (OuterVolumeSpecName: "kube-api-access-snhb8") pod "7c73aa0b-4045-4181-849d-8e7a631cdb87" (UID: "7c73aa0b-4045-4181-849d-8e7a631cdb87"). InnerVolumeSpecName "kube-api-access-snhb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.925060 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c73aa0b-4045-4181-849d-8e7a631cdb87" (UID: "7c73aa0b-4045-4181-849d-8e7a631cdb87"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.929669 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-inventory" (OuterVolumeSpecName: "inventory") pod "7c73aa0b-4045-4181-849d-8e7a631cdb87" (UID: "7c73aa0b-4045-4181-849d-8e7a631cdb87"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.988129 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.988180 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c73aa0b-4045-4181-849d-8e7a631cdb87-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:34 crc kubenswrapper[4624]: I0228 04:09:34.988194 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snhb8\" (UniqueName: \"kubernetes.io/projected/7c73aa0b-4045-4181-849d-8e7a631cdb87-kube-api-access-snhb8\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.256061 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" event={"ID":"7c73aa0b-4045-4181-849d-8e7a631cdb87","Type":"ContainerDied","Data":"1c2031dadbc9fbba970000d37b6e71155afc3343f99f12ee96da95beffc4de5d"} Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.256197 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c2031dadbc9fbba970000d37b6e71155afc3343f99f12ee96da95beffc4de5d" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.256136 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nqpfh" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.371688 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6"] Feb 28 04:09:35 crc kubenswrapper[4624]: E0228 04:09:35.372509 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c73aa0b-4045-4181-849d-8e7a631cdb87" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.372596 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c73aa0b-4045-4181-849d-8e7a631cdb87" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.372872 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c73aa0b-4045-4181-849d-8e7a631cdb87" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.373774 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.379714 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.380507 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.380357 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.381737 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6"] Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.402417 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.409383 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.409432 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghw9f\" (UniqueName: \"kubernetes.io/projected/0bcd1ce2-be32-4778-aced-701605c2cc28-kube-api-access-ghw9f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.409758 4624 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.512308 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.512427 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.512460 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghw9f\" (UniqueName: \"kubernetes.io/projected/0bcd1ce2-be32-4778-aced-701605c2cc28-kube-api-access-ghw9f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.520497 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.527974 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.543253 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghw9f\" (UniqueName: \"kubernetes.io/projected/0bcd1ce2-be32-4778-aced-701605c2cc28-kube-api-access-ghw9f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:35 crc kubenswrapper[4624]: I0228 04:09:35.710210 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:36 crc kubenswrapper[4624]: I0228 04:09:36.354167 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6"] Feb 28 04:09:36 crc kubenswrapper[4624]: E0228 04:09:36.747525 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83fccd4a_5eb5_4443_8449_d40608265c9d.slice/crio-conmon-abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83fccd4a_5eb5_4443_8449_d40608265c9d.slice/crio-abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61.scope\": RecentStats: unable to find data in memory cache]" Feb 28 04:09:37 crc kubenswrapper[4624]: I0228 04:09:37.292112 4624 generic.go:334] "Generic (PLEG): container finished" podID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerID="abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61" exitCode=0 Feb 28 04:09:37 crc kubenswrapper[4624]: I0228 04:09:37.292191 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerDied","Data":"abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61"} Feb 28 04:09:37 crc kubenswrapper[4624]: I0228 04:09:37.295711 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" event={"ID":"0bcd1ce2-be32-4778-aced-701605c2cc28","Type":"ContainerStarted","Data":"d6cbde90f5ae3e85891713b65b1b332316a3d4b4bed4fb4b75852d7a8ab607a9"} Feb 28 04:09:37 crc kubenswrapper[4624]: I0228 04:09:37.295761 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" event={"ID":"0bcd1ce2-be32-4778-aced-701605c2cc28","Type":"ContainerStarted","Data":"6a37d3ab178a62020559e815285b0c7dc5f4ffb0df32cdee7962b0e35f5507cd"} Feb 28 04:09:37 crc kubenswrapper[4624]: I0228 04:09:37.343508 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" podStartSLOduration=1.930755394 podStartE2EDuration="2.3434859s" podCreationTimestamp="2026-02-28 04:09:35 +0000 UTC" firstStartedPulling="2026-02-28 04:09:36.372607198 +0000 UTC m=+2031.036646517" lastFinishedPulling="2026-02-28 04:09:36.785337714 +0000 UTC m=+2031.449377023" observedRunningTime="2026-02-28 04:09:37.338269509 +0000 UTC m=+2032.002308818" watchObservedRunningTime="2026-02-28 04:09:37.3434859 +0000 UTC m=+2032.007525209" Feb 28 04:09:38 crc kubenswrapper[4624]: I0228 04:09:38.311357 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerStarted","Data":"4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063"} Feb 28 04:09:38 crc kubenswrapper[4624]: I0228 04:09:38.349622 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qtdsq" podStartSLOduration=2.8787028919999997 podStartE2EDuration="10.349595198s" podCreationTimestamp="2026-02-28 04:09:28 +0000 UTC" firstStartedPulling="2026-02-28 04:09:30.197672375 +0000 UTC m=+2024.861711684" lastFinishedPulling="2026-02-28 04:09:37.668564681 +0000 UTC m=+2032.332603990" observedRunningTime="2026-02-28 04:09:38.334089848 +0000 UTC m=+2032.998129157" watchObservedRunningTime="2026-02-28 04:09:38.349595198 +0000 UTC m=+2033.013634507" Feb 28 04:09:38 crc kubenswrapper[4624]: I0228 04:09:38.437288 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:38 crc kubenswrapper[4624]: I0228 04:09:38.437736 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:39 crc kubenswrapper[4624]: I0228 04:09:39.493464 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qtdsq" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="registry-server" probeResult="failure" output=< Feb 28 04:09:39 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:09:39 crc kubenswrapper[4624]: > Feb 28 04:09:47 crc kubenswrapper[4624]: E0228 04:09:47.069208 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bcd1ce2_be32_4778_aced_701605c2cc28.slice/crio-d6cbde90f5ae3e85891713b65b1b332316a3d4b4bed4fb4b75852d7a8ab607a9.scope\": RecentStats: unable to find data in memory cache]" Feb 28 04:09:47 crc kubenswrapper[4624]: I0228 04:09:47.418040 4624 generic.go:334] "Generic (PLEG): container finished" podID="0bcd1ce2-be32-4778-aced-701605c2cc28" containerID="d6cbde90f5ae3e85891713b65b1b332316a3d4b4bed4fb4b75852d7a8ab607a9" exitCode=0 Feb 28 04:09:47 crc kubenswrapper[4624]: I0228 04:09:47.418122 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" event={"ID":"0bcd1ce2-be32-4778-aced-701605c2cc28","Type":"ContainerDied","Data":"d6cbde90f5ae3e85891713b65b1b332316a3d4b4bed4fb4b75852d7a8ab607a9"} Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.516990 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.589154 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-qtdsq" Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.769615 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtdsq"] Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.880677 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.946670 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-ssh-key-openstack-edpm-ipam\") pod \"0bcd1ce2-be32-4778-aced-701605c2cc28\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.947241 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghw9f\" (UniqueName: \"kubernetes.io/projected/0bcd1ce2-be32-4778-aced-701605c2cc28-kube-api-access-ghw9f\") pod \"0bcd1ce2-be32-4778-aced-701605c2cc28\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.947356 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-inventory\") pod \"0bcd1ce2-be32-4778-aced-701605c2cc28\" (UID: \"0bcd1ce2-be32-4778-aced-701605c2cc28\") " Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.969507 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcd1ce2-be32-4778-aced-701605c2cc28-kube-api-access-ghw9f" (OuterVolumeSpecName: "kube-api-access-ghw9f") pod "0bcd1ce2-be32-4778-aced-701605c2cc28" (UID: "0bcd1ce2-be32-4778-aced-701605c2cc28"). InnerVolumeSpecName "kube-api-access-ghw9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.983061 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0bcd1ce2-be32-4778-aced-701605c2cc28" (UID: "0bcd1ce2-be32-4778-aced-701605c2cc28"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:09:48 crc kubenswrapper[4624]: I0228 04:09:48.990372 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-inventory" (OuterVolumeSpecName: "inventory") pod "0bcd1ce2-be32-4778-aced-701605c2cc28" (UID: "0bcd1ce2-be32-4778-aced-701605c2cc28"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.050326 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.050405 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghw9f\" (UniqueName: \"kubernetes.io/projected/0bcd1ce2-be32-4778-aced-701605c2cc28-kube-api-access-ghw9f\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.050421 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bcd1ce2-be32-4778-aced-701605c2cc28-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.454610 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" 
event={"ID":"0bcd1ce2-be32-4778-aced-701605c2cc28","Type":"ContainerDied","Data":"6a37d3ab178a62020559e815285b0c7dc5f4ffb0df32cdee7962b0e35f5507cd"} Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.454721 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a37d3ab178a62020559e815285b0c7dc5f4ffb0df32cdee7962b0e35f5507cd" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.456366 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.539889 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.539975 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.555011 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"] Feb 28 04:09:49 crc kubenswrapper[4624]: E0228 04:09:49.555653 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcd1ce2-be32-4778-aced-701605c2cc28" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.555679 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcd1ce2-be32-4778-aced-701605c2cc28" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 04:09:49 crc 
kubenswrapper[4624]: I0228 04:09:49.555968 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcd1ce2-be32-4778-aced-701605c2cc28" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.557053 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.560505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.561337 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.561487 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.561718 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.561948 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.562167 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.563050 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567036 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567163 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567198 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567220 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567250 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567278 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567308 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567515 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567570 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567595 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567620 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wm76\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-kube-api-access-7wm76\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567663 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567705 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.567872 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.568325 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.578437 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"] Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.669962 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670049 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670078 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wm76\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-kube-api-access-7wm76\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670252 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670297 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670334 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670363 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670398 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670418 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670436 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670459 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670483 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670512 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.670539 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.674534 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.676270 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.677467 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.678197 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.678343 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.678425 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.679608 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.693532 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.695211 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.695916 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.695959 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.696784 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wm76\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-kube-api-access-7wm76\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.709265 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.714324 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-67png\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:49 crc kubenswrapper[4624]: I0228 04:09:49.887919 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:09:50 crc kubenswrapper[4624]: I0228 04:09:50.468710 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qtdsq" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="registry-server" containerID="cri-o://4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063" gracePeriod=2
Feb 28 04:09:50 crc kubenswrapper[4624]: I0228 04:09:50.578628 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"]
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.016369 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdsq"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.104059 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p85bg\" (UniqueName: \"kubernetes.io/projected/83fccd4a-5eb5-4443-8449-d40608265c9d-kube-api-access-p85bg\") pod \"83fccd4a-5eb5-4443-8449-d40608265c9d\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") "
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.104244 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-utilities\") pod \"83fccd4a-5eb5-4443-8449-d40608265c9d\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") "
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.104298 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-catalog-content\") pod \"83fccd4a-5eb5-4443-8449-d40608265c9d\" (UID: \"83fccd4a-5eb5-4443-8449-d40608265c9d\") "
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.106358 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-utilities" (OuterVolumeSpecName: "utilities") pod "83fccd4a-5eb5-4443-8449-d40608265c9d" (UID: "83fccd4a-5eb5-4443-8449-d40608265c9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.110053 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fccd4a-5eb5-4443-8449-d40608265c9d-kube-api-access-p85bg" (OuterVolumeSpecName: "kube-api-access-p85bg") pod "83fccd4a-5eb5-4443-8449-d40608265c9d" (UID: "83fccd4a-5eb5-4443-8449-d40608265c9d"). InnerVolumeSpecName "kube-api-access-p85bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.206420 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p85bg\" (UniqueName: \"kubernetes.io/projected/83fccd4a-5eb5-4443-8449-d40608265c9d-kube-api-access-p85bg\") on node \"crc\" DevicePath \"\""
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.206480 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.261578 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83fccd4a-5eb5-4443-8449-d40608265c9d" (UID: "83fccd4a-5eb5-4443-8449-d40608265c9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.308679 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fccd4a-5eb5-4443-8449-d40608265c9d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.483956 4624 generic.go:334] "Generic (PLEG): container finished" podID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerID="4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063" exitCode=0
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.484059 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdsq"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.484051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerDied","Data":"4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063"}
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.484189 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdsq" event={"ID":"83fccd4a-5eb5-4443-8449-d40608265c9d","Type":"ContainerDied","Data":"af0f73d9e69efaeaf38dcfd10bf5bbc68d750607001c2fb5b793b7416e02c303"}
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.484213 4624 scope.go:117] "RemoveContainer" containerID="4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.487769 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" event={"ID":"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b","Type":"ContainerStarted","Data":"6c5b52bde5513e977555a26cd1f68254f4a18c0ec650405bcb9a6ad892865111"}
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.487823 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" event={"ID":"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b","Type":"ContainerStarted","Data":"42a66dfb4ed87271e242cb88a4757972a6d57b8e798b2b2d8fa27ec712cb4859"}
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.526327 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" podStartSLOduration=2.077182946 podStartE2EDuration="2.526304868s" podCreationTimestamp="2026-02-28 04:09:49 +0000 UTC" firstStartedPulling="2026-02-28 04:09:50.601659118 +0000 UTC m=+2045.265698427" lastFinishedPulling="2026-02-28 04:09:51.05078104 +0000 UTC m=+2045.714820349" observedRunningTime="2026-02-28 04:09:51.508654749 +0000 UTC m=+2046.172694058" watchObservedRunningTime="2026-02-28 04:09:51.526304868 +0000 UTC m=+2046.190344177"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.526557 4624 scope.go:117] "RemoveContainer" containerID="abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.562489 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtdsq"]
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.562711 4624 scope.go:117] "RemoveContainer" containerID="d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.581379 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qtdsq"]
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.599452 4624 scope.go:117] "RemoveContainer" containerID="4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063"
Feb 28 04:09:51 crc kubenswrapper[4624]: E0228 04:09:51.602436 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063\": container with ID starting with 4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063 not found: ID does not exist" containerID="4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.602487 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063"} err="failed to get container status \"4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063\": rpc error: code = NotFound desc = could not find container \"4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063\": container with ID starting with 4c1c01cdb4c709c7e6502df57125b6e90b904765e59c29cedfd2a52c3c448063 not found: ID does not exist"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.602519 4624 scope.go:117] "RemoveContainer" containerID="abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61"
Feb 28 04:09:51 crc kubenswrapper[4624]: E0228 04:09:51.603336 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61\": container with ID starting with abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61 not found: ID does not exist" containerID="abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.603368 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61"} err="failed to get container status \"abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61\": rpc error: code = NotFound desc = could not find container \"abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61\": container with ID starting with abcb0f2085291f32cf314c9c466841cf4b96bff0f5769806522edd82c61d9b61 not found: ID does not exist"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.603387 4624 scope.go:117] "RemoveContainer" containerID="d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf"
Feb 28 04:09:51 crc kubenswrapper[4624]: E0228 04:09:51.603934 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf\": container with ID starting with d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf not found: ID does not exist" containerID="d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf"
Feb 28 04:09:51 crc kubenswrapper[4624]: I0228 04:09:51.603955 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf"} err="failed to get container status \"d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf\": rpc error: code = NotFound desc = could not find container \"d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf\": container with ID starting with d24d5f11086bf5b52bae2e0b5743d2b86fc061feb2616a1b4e8c876ab9a9d3cf not found: ID does not exist"
Feb 28 04:09:52 crc kubenswrapper[4624]: I0228 04:09:52.098397 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" path="/var/lib/kubelet/pods/83fccd4a-5eb5-4443-8449-d40608265c9d/volumes"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.149508 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537530-wjkfc"]
Feb 28 04:10:00 crc kubenswrapper[4624]: E0228 04:10:00.152126 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="extract-content"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.152155 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="extract-content"
Feb 28 04:10:00 crc kubenswrapper[4624]: E0228 04:10:00.152209 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="registry-server"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.152218 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="registry-server"
Feb 28 04:10:00 crc kubenswrapper[4624]: E0228 04:10:00.152270 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="extract-utilities"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.152281 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="extract-utilities"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.152633 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fccd4a-5eb5-4443-8449-d40608265c9d" containerName="registry-server"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.153963 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.156757 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.158512 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.159194 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.172726 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537530-wjkfc"]
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.264136 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72skk\" (UniqueName: \"kubernetes.io/projected/d65556c4-c00e-4478-808e-a3f869783e84-kube-api-access-72skk\") pod \"auto-csr-approver-29537530-wjkfc\" (UID: \"d65556c4-c00e-4478-808e-a3f869783e84\") " pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.367882 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72skk\" (UniqueName: \"kubernetes.io/projected/d65556c4-c00e-4478-808e-a3f869783e84-kube-api-access-72skk\") pod \"auto-csr-approver-29537530-wjkfc\" (UID: \"d65556c4-c00e-4478-808e-a3f869783e84\") " pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.390004 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72skk\" (UniqueName: \"kubernetes.io/projected/d65556c4-c00e-4478-808e-a3f869783e84-kube-api-access-72skk\") pod \"auto-csr-approver-29537530-wjkfc\" (UID: \"d65556c4-c00e-4478-808e-a3f869783e84\") " pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.481988 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:00 crc kubenswrapper[4624]: I0228 04:10:00.956313 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537530-wjkfc"]
Feb 28 04:10:01 crc kubenswrapper[4624]: I0228 04:10:01.609155 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537530-wjkfc" event={"ID":"d65556c4-c00e-4478-808e-a3f869783e84","Type":"ContainerStarted","Data":"6c951c322cea91919430e4560ea62e3e1f7817483ca25a0d0dd923e864e0c712"}
Feb 28 04:10:02 crc kubenswrapper[4624]: I0228 04:10:02.623279 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537530-wjkfc" event={"ID":"d65556c4-c00e-4478-808e-a3f869783e84","Type":"ContainerStarted","Data":"16b088b1226bc33dd8ced5530e13a2691df082cd51eff4da1b9b788fafe97270"}
Feb 28 04:10:02 crc kubenswrapper[4624]: I0228 04:10:02.648699 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537530-wjkfc" podStartSLOduration=1.7524712569999998 podStartE2EDuration="2.648675756s" podCreationTimestamp="2026-02-28 04:10:00 +0000 UTC" firstStartedPulling="2026-02-28 04:10:00.965996732 +0000 UTC m=+2055.630036041" lastFinishedPulling="2026-02-28 04:10:01.862201221 +0000 UTC m=+2056.526240540" observedRunningTime="2026-02-28 04:10:02.644728569 +0000 UTC m=+2057.308767878" watchObservedRunningTime="2026-02-28 04:10:02.648675756 +0000 UTC m=+2057.312715065"
Feb 28 04:10:03 crc kubenswrapper[4624]: I0228 04:10:03.636096 4624 generic.go:334] "Generic (PLEG): container finished" podID="d65556c4-c00e-4478-808e-a3f869783e84" containerID="16b088b1226bc33dd8ced5530e13a2691df082cd51eff4da1b9b788fafe97270" exitCode=0
Feb 28 04:10:03 crc kubenswrapper[4624]: I0228 04:10:03.636341 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537530-wjkfc" event={"ID":"d65556c4-c00e-4478-808e-a3f869783e84","Type":"ContainerDied","Data":"16b088b1226bc33dd8ced5530e13a2691df082cd51eff4da1b9b788fafe97270"}
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.094355 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.202054 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72skk\" (UniqueName: \"kubernetes.io/projected/d65556c4-c00e-4478-808e-a3f869783e84-kube-api-access-72skk\") pod \"d65556c4-c00e-4478-808e-a3f869783e84\" (UID: \"d65556c4-c00e-4478-808e-a3f869783e84\") "
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.224208 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65556c4-c00e-4478-808e-a3f869783e84-kube-api-access-72skk" (OuterVolumeSpecName: "kube-api-access-72skk") pod "d65556c4-c00e-4478-808e-a3f869783e84" (UID: "d65556c4-c00e-4478-808e-a3f869783e84"). InnerVolumeSpecName "kube-api-access-72skk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.306647 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72skk\" (UniqueName: \"kubernetes.io/projected/d65556c4-c00e-4478-808e-a3f869783e84-kube-api-access-72skk\") on node \"crc\" DevicePath \"\""
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.662387 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537530-wjkfc" event={"ID":"d65556c4-c00e-4478-808e-a3f869783e84","Type":"ContainerDied","Data":"6c951c322cea91919430e4560ea62e3e1f7817483ca25a0d0dd923e864e0c712"}
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.662439 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c951c322cea91919430e4560ea62e3e1f7817483ca25a0d0dd923e864e0c712"
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.662524 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537530-wjkfc"
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.742506 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537524-4rsbj"]
Feb 28 04:10:05 crc kubenswrapper[4624]: I0228 04:10:05.760943 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537524-4rsbj"]
Feb 28 04:10:06 crc kubenswrapper[4624]: I0228 04:10:06.103561 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56989800-b2b9-44f7-9dfb-c94eb5166870" path="/var/lib/kubelet/pods/56989800-b2b9-44f7-9dfb-c94eb5166870/volumes"
Feb 28 04:10:19 crc kubenswrapper[4624]: I0228 04:10:19.539584 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 04:10:19 crc kubenswrapper[4624]: I0228 04:10:19.540479 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 04:10:31 crc kubenswrapper[4624]: I0228 04:10:31.968001 4624 generic.go:334] "Generic (PLEG): container finished" podID="b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" containerID="6c5b52bde5513e977555a26cd1f68254f4a18c0ec650405bcb9a6ad892865111" exitCode=0
Feb 28 04:10:31 crc kubenswrapper[4624]: I0228 04:10:31.968129 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" event={"ID":"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b","Type":"ContainerDied","Data":"6c5b52bde5513e977555a26cd1f68254f4a18c0ec650405bcb9a6ad892865111"}
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.482424 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png"
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.631003 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.631715 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.631754 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wm76\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-kube-api-access-7wm76\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.631928 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ovn-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.631972 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-nova-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632022 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-repo-setup-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632071 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-neutron-metadata-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632171 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-telemetry-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632244 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632313 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632464 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-bootstrap-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632583 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-inventory\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632633 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ssh-key-openstack-edpm-ipam\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.632681 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-libvirt-combined-ca-bundle\") pod \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\" (UID: \"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b\") "
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.640269 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.640432 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.641413 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.644874 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.661827 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.661901 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.663064 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.663428 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-kube-api-access-7wm76" (OuterVolumeSpecName: "kube-api-access-7wm76") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "kube-api-access-7wm76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.663505 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.663574 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.664403 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.664694 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.675235 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.679008 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-inventory" (OuterVolumeSpecName: "inventory") pod "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" (UID: "b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734130 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734170 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734183 4624 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734194 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734208 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath 
\"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734219 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wm76\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-kube-api-access-7wm76\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734232 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734241 4624 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734250 4624 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734260 4624 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734269 4624 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734279 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734288 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:33 crc kubenswrapper[4624]: I0228 04:10:33.734301 4624 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.000897 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" event={"ID":"b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b","Type":"ContainerDied","Data":"42a66dfb4ed87271e242cb88a4757972a6d57b8e798b2b2d8fa27ec712cb4859"} Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.001042 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a66dfb4ed87271e242cb88a4757972a6d57b8e798b2b2d8fa27ec712cb4859" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.001274 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-67png" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.249108 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq"] Feb 28 04:10:34 crc kubenswrapper[4624]: E0228 04:10:34.249756 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65556c4-c00e-4478-808e-a3f869783e84" containerName="oc" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.249789 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65556c4-c00e-4478-808e-a3f869783e84" containerName="oc" Feb 28 04:10:34 crc kubenswrapper[4624]: E0228 04:10:34.249835 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.249849 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.250209 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.250257 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65556c4-c00e-4478-808e-a3f869783e84" containerName="oc" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.251784 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.258858 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.259016 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.259116 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.259280 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.268554 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.289398 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq"] Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.351822 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxx4s\" (UniqueName: \"kubernetes.io/projected/e0224a59-2832-42cb-91f3-e0f12db48a81-kube-api-access-wxx4s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.351871 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: 
\"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.351973 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.352024 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0224a59-2832-42cb-91f3-e0f12db48a81-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.352162 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.454817 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.454934 4624 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.454966 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0224a59-2832-42cb-91f3-e0f12db48a81-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.455036 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.455205 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxx4s\" (UniqueName: \"kubernetes.io/projected/e0224a59-2832-42cb-91f3-e0f12db48a81-kube-api-access-wxx4s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.456815 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0224a59-2832-42cb-91f3-e0f12db48a81-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.462407 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.465252 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.471584 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.472265 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxx4s\" (UniqueName: \"kubernetes.io/projected/e0224a59-2832-42cb-91f3-e0f12db48a81-kube-api-access-wxx4s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks9tq\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:34 crc kubenswrapper[4624]: I0228 04:10:34.588547 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:10:35 crc kubenswrapper[4624]: I0228 04:10:35.237386 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq"] Feb 28 04:10:36 crc kubenswrapper[4624]: I0228 04:10:36.023541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" event={"ID":"e0224a59-2832-42cb-91f3-e0f12db48a81","Type":"ContainerStarted","Data":"532a51d62c01dd27fbe66dbc3e1191f2b30e95fab1b0379a04e26bba20bc0fff"} Feb 28 04:10:36 crc kubenswrapper[4624]: I0228 04:10:36.024272 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" event={"ID":"e0224a59-2832-42cb-91f3-e0f12db48a81","Type":"ContainerStarted","Data":"f67e6d0ef9391750288b392724ed6917b26495d4262ebe8affe010ad0d071abd"} Feb 28 04:10:49 crc kubenswrapper[4624]: I0228 04:10:49.540264 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:10:49 crc kubenswrapper[4624]: I0228 04:10:49.541248 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:10:49 crc kubenswrapper[4624]: I0228 04:10:49.541321 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:10:49 crc kubenswrapper[4624]: I0228 04:10:49.542544 4624 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f66582844ab2a7a20d79ed088627297535eafe4e568aa67fdb624fc0d6a1269"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:10:49 crc kubenswrapper[4624]: I0228 04:10:49.542612 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://5f66582844ab2a7a20d79ed088627297535eafe4e568aa67fdb624fc0d6a1269" gracePeriod=600 Feb 28 04:10:50 crc kubenswrapper[4624]: I0228 04:10:50.203761 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="5f66582844ab2a7a20d79ed088627297535eafe4e568aa67fdb624fc0d6a1269" exitCode=0 Feb 28 04:10:50 crc kubenswrapper[4624]: I0228 04:10:50.203987 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"5f66582844ab2a7a20d79ed088627297535eafe4e568aa67fdb624fc0d6a1269"} Feb 28 04:10:50 crc kubenswrapper[4624]: I0228 04:10:50.204502 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127"} Feb 28 04:10:50 crc kubenswrapper[4624]: I0228 04:10:50.204525 4624 scope.go:117] "RemoveContainer" containerID="85202f3c2eb7afc45e6c370ce213bc3a2ea8035f4d26bee2d92002bb726cd56a" Feb 28 04:10:50 crc kubenswrapper[4624]: I0228 04:10:50.240068 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" podStartSLOduration=15.800060654 podStartE2EDuration="16.240042262s" podCreationTimestamp="2026-02-28 04:10:34 +0000 UTC" firstStartedPulling="2026-02-28 04:10:35.249074596 +0000 UTC m=+2089.913113905" lastFinishedPulling="2026-02-28 04:10:35.689056194 +0000 UTC m=+2090.353095513" observedRunningTime="2026-02-28 04:10:36.057476797 +0000 UTC m=+2090.721516146" watchObservedRunningTime="2026-02-28 04:10:50.240042262 +0000 UTC m=+2104.904081571" Feb 28 04:10:59 crc kubenswrapper[4624]: I0228 04:10:59.719672 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zf6fq"] Feb 28 04:10:59 crc kubenswrapper[4624]: I0228 04:10:59.722872 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:10:59 crc kubenswrapper[4624]: I0228 04:10:59.736630 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zf6fq"] Feb 28 04:10:59 crc kubenswrapper[4624]: I0228 04:10:59.897698 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-utilities\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:10:59 crc kubenswrapper[4624]: I0228 04:10:59.897810 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4k9h\" (UniqueName: \"kubernetes.io/projected/51ffdccf-f348-4b6c-8148-305deae639d3-kube-api-access-q4k9h\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:10:59 crc kubenswrapper[4624]: I0228 04:10:59.898856 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-catalog-content\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.000768 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-utilities\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.000841 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4k9h\" (UniqueName: \"kubernetes.io/projected/51ffdccf-f348-4b6c-8148-305deae639d3-kube-api-access-q4k9h\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.000906 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-catalog-content\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.002123 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-utilities\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.002657 4624 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-catalog-content\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.034041 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4k9h\" (UniqueName: \"kubernetes.io/projected/51ffdccf-f348-4b6c-8148-305deae639d3-kube-api-access-q4k9h\") pod \"certified-operators-zf6fq\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.062736 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:00 crc kubenswrapper[4624]: I0228 04:11:00.673802 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zf6fq"] Feb 28 04:11:01 crc kubenswrapper[4624]: I0228 04:11:01.317880 4624 generic.go:334] "Generic (PLEG): container finished" podID="51ffdccf-f348-4b6c-8148-305deae639d3" containerID="60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531" exitCode=0 Feb 28 04:11:01 crc kubenswrapper[4624]: I0228 04:11:01.318105 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerDied","Data":"60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531"} Feb 28 04:11:01 crc kubenswrapper[4624]: I0228 04:11:01.318399 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerStarted","Data":"0d4bb5c62ced51ce2a9a1153d4d295f2bf348f4f2882c756bd48c1de830de60f"} Feb 28 04:11:01 crc 
kubenswrapper[4624]: I0228 04:11:01.320355 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:11:01 crc kubenswrapper[4624]: I0228 04:11:01.535585 4624 scope.go:117] "RemoveContainer" containerID="d24952b1d3ea42c764556d005dcf2dce66d4e577f5c14d3274cf9fa3f9f908a3" Feb 28 04:11:02 crc kubenswrapper[4624]: I0228 04:11:02.328564 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerStarted","Data":"cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85"} Feb 28 04:11:04 crc kubenswrapper[4624]: I0228 04:11:04.347640 4624 generic.go:334] "Generic (PLEG): container finished" podID="51ffdccf-f348-4b6c-8148-305deae639d3" containerID="cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85" exitCode=0 Feb 28 04:11:04 crc kubenswrapper[4624]: I0228 04:11:04.347714 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerDied","Data":"cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85"} Feb 28 04:11:05 crc kubenswrapper[4624]: I0228 04:11:05.360862 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerStarted","Data":"170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984"} Feb 28 04:11:05 crc kubenswrapper[4624]: I0228 04:11:05.392244 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zf6fq" podStartSLOduration=2.98855226 podStartE2EDuration="6.392221512s" podCreationTimestamp="2026-02-28 04:10:59 +0000 UTC" firstStartedPulling="2026-02-28 04:11:01.320032326 +0000 UTC m=+2115.984071655" lastFinishedPulling="2026-02-28 04:11:04.723701598 +0000 UTC 
m=+2119.387740907" observedRunningTime="2026-02-28 04:11:05.381566753 +0000 UTC m=+2120.045606062" watchObservedRunningTime="2026-02-28 04:11:05.392221512 +0000 UTC m=+2120.056260821" Feb 28 04:11:10 crc kubenswrapper[4624]: I0228 04:11:10.063340 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:10 crc kubenswrapper[4624]: I0228 04:11:10.064191 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:10 crc kubenswrapper[4624]: I0228 04:11:10.172475 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:10 crc kubenswrapper[4624]: I0228 04:11:10.474762 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:10 crc kubenswrapper[4624]: I0228 04:11:10.526586 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zf6fq"] Feb 28 04:11:12 crc kubenswrapper[4624]: I0228 04:11:12.423563 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zf6fq" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="registry-server" containerID="cri-o://170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984" gracePeriod=2 Feb 28 04:11:12 crc kubenswrapper[4624]: I0228 04:11:12.972820 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.031475 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-utilities\") pod \"51ffdccf-f348-4b6c-8148-305deae639d3\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.031611 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-catalog-content\") pod \"51ffdccf-f348-4b6c-8148-305deae639d3\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.031748 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4k9h\" (UniqueName: \"kubernetes.io/projected/51ffdccf-f348-4b6c-8148-305deae639d3-kube-api-access-q4k9h\") pod \"51ffdccf-f348-4b6c-8148-305deae639d3\" (UID: \"51ffdccf-f348-4b6c-8148-305deae639d3\") " Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.033580 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-utilities" (OuterVolumeSpecName: "utilities") pod "51ffdccf-f348-4b6c-8148-305deae639d3" (UID: "51ffdccf-f348-4b6c-8148-305deae639d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.054620 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ffdccf-f348-4b6c-8148-305deae639d3-kube-api-access-q4k9h" (OuterVolumeSpecName: "kube-api-access-q4k9h") pod "51ffdccf-f348-4b6c-8148-305deae639d3" (UID: "51ffdccf-f348-4b6c-8148-305deae639d3"). InnerVolumeSpecName "kube-api-access-q4k9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.093209 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51ffdccf-f348-4b6c-8148-305deae639d3" (UID: "51ffdccf-f348-4b6c-8148-305deae639d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.134921 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4k9h\" (UniqueName: \"kubernetes.io/projected/51ffdccf-f348-4b6c-8148-305deae639d3-kube-api-access-q4k9h\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.135196 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.135530 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ffdccf-f348-4b6c-8148-305deae639d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.435798 4624 generic.go:334] "Generic (PLEG): container finished" podID="51ffdccf-f348-4b6c-8148-305deae639d3" containerID="170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984" exitCode=0 Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.435992 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerDied","Data":"170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984"} Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.436362 4624 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-zf6fq" event={"ID":"51ffdccf-f348-4b6c-8148-305deae639d3","Type":"ContainerDied","Data":"0d4bb5c62ced51ce2a9a1153d4d295f2bf348f4f2882c756bd48c1de830de60f"} Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.436141 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf6fq" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.436398 4624 scope.go:117] "RemoveContainer" containerID="170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.467007 4624 scope.go:117] "RemoveContainer" containerID="cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.475396 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zf6fq"] Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.483707 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zf6fq"] Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.505327 4624 scope.go:117] "RemoveContainer" containerID="60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.548493 4624 scope.go:117] "RemoveContainer" containerID="170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984" Feb 28 04:11:13 crc kubenswrapper[4624]: E0228 04:11:13.548969 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984\": container with ID starting with 170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984 not found: ID does not exist" containerID="170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 
04:11:13.549008 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984"} err="failed to get container status \"170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984\": rpc error: code = NotFound desc = could not find container \"170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984\": container with ID starting with 170e45d09da5b812783330019b1a62946288d03d161e9cc273f9d8747f65e984 not found: ID does not exist" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.549038 4624 scope.go:117] "RemoveContainer" containerID="cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85" Feb 28 04:11:13 crc kubenswrapper[4624]: E0228 04:11:13.549514 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85\": container with ID starting with cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85 not found: ID does not exist" containerID="cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.549567 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85"} err="failed to get container status \"cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85\": rpc error: code = NotFound desc = could not find container \"cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85\": container with ID starting with cacf98e866bc582475783cb2e1a16dcdd29728357797bb443ae05c5ceb0e1d85 not found: ID does not exist" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.549603 4624 scope.go:117] "RemoveContainer" containerID="60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531" Feb 28 04:11:13 crc 
kubenswrapper[4624]: E0228 04:11:13.549924 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531\": container with ID starting with 60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531 not found: ID does not exist" containerID="60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531" Feb 28 04:11:13 crc kubenswrapper[4624]: I0228 04:11:13.549966 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531"} err="failed to get container status \"60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531\": rpc error: code = NotFound desc = could not find container \"60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531\": container with ID starting with 60b1b1690ca0c2be90108fc7cb019da72d61df2b46edb3a89b262397f1eb0531 not found: ID does not exist" Feb 28 04:11:14 crc kubenswrapper[4624]: I0228 04:11:14.103013 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" path="/var/lib/kubelet/pods/51ffdccf-f348-4b6c-8148-305deae639d3/volumes" Feb 28 04:11:44 crc kubenswrapper[4624]: I0228 04:11:44.779796 4624 generic.go:334] "Generic (PLEG): container finished" podID="e0224a59-2832-42cb-91f3-e0f12db48a81" containerID="532a51d62c01dd27fbe66dbc3e1191f2b30e95fab1b0379a04e26bba20bc0fff" exitCode=0 Feb 28 04:11:44 crc kubenswrapper[4624]: I0228 04:11:44.779906 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" event={"ID":"e0224a59-2832-42cb-91f3-e0f12db48a81","Type":"ContainerDied","Data":"532a51d62c01dd27fbe66dbc3e1191f2b30e95fab1b0379a04e26bba20bc0fff"} Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.337138 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.481264 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ovn-combined-ca-bundle\") pod \"e0224a59-2832-42cb-91f3-e0f12db48a81\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.481682 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxx4s\" (UniqueName: \"kubernetes.io/projected/e0224a59-2832-42cb-91f3-e0f12db48a81-kube-api-access-wxx4s\") pod \"e0224a59-2832-42cb-91f3-e0f12db48a81\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.481779 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-inventory\") pod \"e0224a59-2832-42cb-91f3-e0f12db48a81\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.481930 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0224a59-2832-42cb-91f3-e0f12db48a81-ovncontroller-config-0\") pod \"e0224a59-2832-42cb-91f3-e0f12db48a81\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.482040 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ssh-key-openstack-edpm-ipam\") pod \"e0224a59-2832-42cb-91f3-e0f12db48a81\" (UID: \"e0224a59-2832-42cb-91f3-e0f12db48a81\") " Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.489372 4624 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0224a59-2832-42cb-91f3-e0f12db48a81-kube-api-access-wxx4s" (OuterVolumeSpecName: "kube-api-access-wxx4s") pod "e0224a59-2832-42cb-91f3-e0f12db48a81" (UID: "e0224a59-2832-42cb-91f3-e0f12db48a81"). InnerVolumeSpecName "kube-api-access-wxx4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.490216 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e0224a59-2832-42cb-91f3-e0f12db48a81" (UID: "e0224a59-2832-42cb-91f3-e0f12db48a81"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.518224 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0224a59-2832-42cb-91f3-e0f12db48a81" (UID: "e0224a59-2832-42cb-91f3-e0f12db48a81"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.520072 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-inventory" (OuterVolumeSpecName: "inventory") pod "e0224a59-2832-42cb-91f3-e0f12db48a81" (UID: "e0224a59-2832-42cb-91f3-e0f12db48a81"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.522717 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0224a59-2832-42cb-91f3-e0f12db48a81-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e0224a59-2832-42cb-91f3-e0f12db48a81" (UID: "e0224a59-2832-42cb-91f3-e0f12db48a81"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.584945 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxx4s\" (UniqueName: \"kubernetes.io/projected/e0224a59-2832-42cb-91f3-e0f12db48a81-kube-api-access-wxx4s\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.584986 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.584997 4624 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0224a59-2832-42cb-91f3-e0f12db48a81-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.585009 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.585019 4624 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0224a59-2832-42cb-91f3-e0f12db48a81-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.801984 4624 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" event={"ID":"e0224a59-2832-42cb-91f3-e0f12db48a81","Type":"ContainerDied","Data":"f67e6d0ef9391750288b392724ed6917b26495d4262ebe8affe010ad0d071abd"} Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.802056 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f67e6d0ef9391750288b392724ed6917b26495d4262ebe8affe010ad0d071abd" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.802225 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks9tq" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.926969 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89"] Feb 28 04:11:46 crc kubenswrapper[4624]: E0228 04:11:46.927777 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="extract-content" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.927793 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="extract-content" Feb 28 04:11:46 crc kubenswrapper[4624]: E0228 04:11:46.927821 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0224a59-2832-42cb-91f3-e0f12db48a81" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.927827 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0224a59-2832-42cb-91f3-e0f12db48a81" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 04:11:46 crc kubenswrapper[4624]: E0228 04:11:46.927851 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="extract-utilities" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.927859 4624 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="extract-utilities" Feb 28 04:11:46 crc kubenswrapper[4624]: E0228 04:11:46.927874 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="registry-server" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.927881 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="registry-server" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.928067 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ffdccf-f348-4b6c-8148-305deae639d3" containerName="registry-server" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.928100 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0224a59-2832-42cb-91f3-e0f12db48a81" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.928817 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.931166 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.931266 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.931381 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.931459 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.931717 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.942861 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89"] Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.946419 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.997120 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.997211 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t646r\" (UniqueName: \"kubernetes.io/projected/bef97704-39c1-4a26-b58f-90b76510822c-kube-api-access-t646r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.997297 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.997504 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.997679 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:46 crc kubenswrapper[4624]: I0228 04:11:46.998150 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.101269 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.101737 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.101867 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.102821 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t646r\" (UniqueName: 
\"kubernetes.io/projected/bef97704-39c1-4a26-b58f-90b76510822c-kube-api-access-t646r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.103018 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.103963 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.106567 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.107364 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.112619 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.116302 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.117019 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.119377 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t646r\" (UniqueName: \"kubernetes.io/projected/bef97704-39c1-4a26-b58f-90b76510822c-kube-api-access-t646r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89\" (UID: 
\"bef97704-39c1-4a26-b58f-90b76510822c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:47 crc kubenswrapper[4624]: I0228 04:11:47.248399 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:11:48 crc kubenswrapper[4624]: I0228 04:11:47.844220 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89"] Feb 28 04:11:48 crc kubenswrapper[4624]: I0228 04:11:48.826882 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" event={"ID":"bef97704-39c1-4a26-b58f-90b76510822c","Type":"ContainerStarted","Data":"0d25aaa89dc0f4f4bcc050674e80c19cfa20c6d20cdc0c77387e4f315e8ccfb1"} Feb 28 04:11:48 crc kubenswrapper[4624]: I0228 04:11:48.827389 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" event={"ID":"bef97704-39c1-4a26-b58f-90b76510822c","Type":"ContainerStarted","Data":"084fb405a90438ca7be1c5e182dd20f35d0c0349386696a2fdf298646d52e40c"} Feb 28 04:11:48 crc kubenswrapper[4624]: I0228 04:11:48.864255 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" podStartSLOduration=2.3629213399999998 podStartE2EDuration="2.864231758s" podCreationTimestamp="2026-02-28 04:11:46 +0000 UTC" firstStartedPulling="2026-02-28 04:11:47.841293951 +0000 UTC m=+2162.505333270" lastFinishedPulling="2026-02-28 04:11:48.342604369 +0000 UTC m=+2163.006643688" observedRunningTime="2026-02-28 04:11:48.850247069 +0000 UTC m=+2163.514286408" watchObservedRunningTime="2026-02-28 04:11:48.864231758 +0000 UTC m=+2163.528271067" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.171611 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29537532-klq9j"] Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.174195 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.180189 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.180827 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.180992 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.186027 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537532-klq9j"] Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.277693 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5dm\" (UniqueName: \"kubernetes.io/projected/a22f2130-3836-4da5-9fb1-1095decd41b2-kube-api-access-zm5dm\") pod \"auto-csr-approver-29537532-klq9j\" (UID: \"a22f2130-3836-4da5-9fb1-1095decd41b2\") " pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.379667 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5dm\" (UniqueName: \"kubernetes.io/projected/a22f2130-3836-4da5-9fb1-1095decd41b2-kube-api-access-zm5dm\") pod \"auto-csr-approver-29537532-klq9j\" (UID: \"a22f2130-3836-4da5-9fb1-1095decd41b2\") " pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.407998 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5dm\" (UniqueName: 
\"kubernetes.io/projected/a22f2130-3836-4da5-9fb1-1095decd41b2-kube-api-access-zm5dm\") pod \"auto-csr-approver-29537532-klq9j\" (UID: \"a22f2130-3836-4da5-9fb1-1095decd41b2\") " pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:00 crc kubenswrapper[4624]: I0228 04:12:00.516540 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:01 crc kubenswrapper[4624]: I0228 04:12:01.070592 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537532-klq9j"] Feb 28 04:12:01 crc kubenswrapper[4624]: I0228 04:12:01.985439 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537532-klq9j" event={"ID":"a22f2130-3836-4da5-9fb1-1095decd41b2","Type":"ContainerStarted","Data":"91f28d3f21b30b77e46b69e3a1d437b8bae8a52dda4839a298990c9b8458b8c6"} Feb 28 04:12:03 crc kubenswrapper[4624]: I0228 04:12:03.005976 4624 generic.go:334] "Generic (PLEG): container finished" podID="a22f2130-3836-4da5-9fb1-1095decd41b2" containerID="d3bff60e90c6ebb30b4cac9cd31cc5b59cebe4251b72ab1d3ae849c8a217414d" exitCode=0 Feb 28 04:12:03 crc kubenswrapper[4624]: I0228 04:12:03.006051 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537532-klq9j" event={"ID":"a22f2130-3836-4da5-9fb1-1095decd41b2","Type":"ContainerDied","Data":"d3bff60e90c6ebb30b4cac9cd31cc5b59cebe4251b72ab1d3ae849c8a217414d"} Feb 28 04:12:04 crc kubenswrapper[4624]: I0228 04:12:04.421019 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:04 crc kubenswrapper[4624]: I0228 04:12:04.524742 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5dm\" (UniqueName: \"kubernetes.io/projected/a22f2130-3836-4da5-9fb1-1095decd41b2-kube-api-access-zm5dm\") pod \"a22f2130-3836-4da5-9fb1-1095decd41b2\" (UID: \"a22f2130-3836-4da5-9fb1-1095decd41b2\") " Feb 28 04:12:04 crc kubenswrapper[4624]: I0228 04:12:04.533669 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22f2130-3836-4da5-9fb1-1095decd41b2-kube-api-access-zm5dm" (OuterVolumeSpecName: "kube-api-access-zm5dm") pod "a22f2130-3836-4da5-9fb1-1095decd41b2" (UID: "a22f2130-3836-4da5-9fb1-1095decd41b2"). InnerVolumeSpecName "kube-api-access-zm5dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:12:04 crc kubenswrapper[4624]: I0228 04:12:04.627609 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5dm\" (UniqueName: \"kubernetes.io/projected/a22f2130-3836-4da5-9fb1-1095decd41b2-kube-api-access-zm5dm\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:05 crc kubenswrapper[4624]: I0228 04:12:05.032186 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537532-klq9j" event={"ID":"a22f2130-3836-4da5-9fb1-1095decd41b2","Type":"ContainerDied","Data":"91f28d3f21b30b77e46b69e3a1d437b8bae8a52dda4839a298990c9b8458b8c6"} Feb 28 04:12:05 crc kubenswrapper[4624]: I0228 04:12:05.032837 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f28d3f21b30b77e46b69e3a1d437b8bae8a52dda4839a298990c9b8458b8c6" Feb 28 04:12:05 crc kubenswrapper[4624]: I0228 04:12:05.032611 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537532-klq9j" Feb 28 04:12:05 crc kubenswrapper[4624]: I0228 04:12:05.501200 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537526-llbnl"] Feb 28 04:12:05 crc kubenswrapper[4624]: I0228 04:12:05.510243 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537526-llbnl"] Feb 28 04:12:06 crc kubenswrapper[4624]: I0228 04:12:06.128433 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486f976c-5601-4fa6-a076-f1c064661903" path="/var/lib/kubelet/pods/486f976c-5601-4fa6-a076-f1c064661903/volumes" Feb 28 04:12:39 crc kubenswrapper[4624]: I0228 04:12:39.513071 4624 generic.go:334] "Generic (PLEG): container finished" podID="bef97704-39c1-4a26-b58f-90b76510822c" containerID="0d25aaa89dc0f4f4bcc050674e80c19cfa20c6d20cdc0c77387e4f315e8ccfb1" exitCode=0 Feb 28 04:12:39 crc kubenswrapper[4624]: I0228 04:12:39.513216 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" event={"ID":"bef97704-39c1-4a26-b58f-90b76510822c","Type":"ContainerDied","Data":"0d25aaa89dc0f4f4bcc050674e80c19cfa20c6d20cdc0c77387e4f315e8ccfb1"} Feb 28 04:12:40 crc kubenswrapper[4624]: I0228 04:12:40.985576 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.131411 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t646r\" (UniqueName: \"kubernetes.io/projected/bef97704-39c1-4a26-b58f-90b76510822c-kube-api-access-t646r\") pod \"bef97704-39c1-4a26-b58f-90b76510822c\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.131454 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-metadata-combined-ca-bundle\") pod \"bef97704-39c1-4a26-b58f-90b76510822c\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.131715 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"bef97704-39c1-4a26-b58f-90b76510822c\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.131800 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-nova-metadata-neutron-config-0\") pod \"bef97704-39c1-4a26-b58f-90b76510822c\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.131846 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-ssh-key-openstack-edpm-ipam\") pod \"bef97704-39c1-4a26-b58f-90b76510822c\" (UID: 
\"bef97704-39c1-4a26-b58f-90b76510822c\") " Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.131899 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-inventory\") pod \"bef97704-39c1-4a26-b58f-90b76510822c\" (UID: \"bef97704-39c1-4a26-b58f-90b76510822c\") " Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.140357 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef97704-39c1-4a26-b58f-90b76510822c-kube-api-access-t646r" (OuterVolumeSpecName: "kube-api-access-t646r") pod "bef97704-39c1-4a26-b58f-90b76510822c" (UID: "bef97704-39c1-4a26-b58f-90b76510822c"). InnerVolumeSpecName "kube-api-access-t646r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.144490 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bef97704-39c1-4a26-b58f-90b76510822c" (UID: "bef97704-39c1-4a26-b58f-90b76510822c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.164279 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-inventory" (OuterVolumeSpecName: "inventory") pod "bef97704-39c1-4a26-b58f-90b76510822c" (UID: "bef97704-39c1-4a26-b58f-90b76510822c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.166740 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bef97704-39c1-4a26-b58f-90b76510822c" (UID: "bef97704-39c1-4a26-b58f-90b76510822c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.168324 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "bef97704-39c1-4a26-b58f-90b76510822c" (UID: "bef97704-39c1-4a26-b58f-90b76510822c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.187420 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "bef97704-39c1-4a26-b58f-90b76510822c" (UID: "bef97704-39c1-4a26-b58f-90b76510822c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.234858 4624 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.234912 4624 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.234927 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.234944 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.234956 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t646r\" (UniqueName: \"kubernetes.io/projected/bef97704-39c1-4a26-b58f-90b76510822c-kube-api-access-t646r\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.234969 4624 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef97704-39c1-4a26-b58f-90b76510822c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.541766 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" event={"ID":"bef97704-39c1-4a26-b58f-90b76510822c","Type":"ContainerDied","Data":"084fb405a90438ca7be1c5e182dd20f35d0c0349386696a2fdf298646d52e40c"} Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.541825 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084fb405a90438ca7be1c5e182dd20f35d0c0349386696a2fdf298646d52e40c" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.541961 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.668209 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4"] Feb 28 04:12:41 crc kubenswrapper[4624]: E0228 04:12:41.668623 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22f2130-3836-4da5-9fb1-1095decd41b2" containerName="oc" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.668644 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22f2130-3836-4da5-9fb1-1095decd41b2" containerName="oc" Feb 28 04:12:41 crc kubenswrapper[4624]: E0228 04:12:41.668666 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef97704-39c1-4a26-b58f-90b76510822c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.668694 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef97704-39c1-4a26-b58f-90b76510822c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.668890 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef97704-39c1-4a26-b58f-90b76510822c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.668912 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a22f2130-3836-4da5-9fb1-1095decd41b2" containerName="oc" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.669983 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.672439 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.672709 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.672942 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.684332 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.684614 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.724684 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4"] Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.847377 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.847814 4624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.847872 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fznnl\" (UniqueName: \"kubernetes.io/projected/9608e724-9bc7-4040-bfd3-29f159075de8-kube-api-access-fznnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.848058 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.848446 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.951254 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.952206 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.952262 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fznnl\" (UniqueName: \"kubernetes.io/projected/9608e724-9bc7-4040-bfd3-29f159075de8-kube-api-access-fznnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.952316 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.952361 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.957006 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.957314 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.962401 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.970322 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.985320 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fznnl\" (UniqueName: 
\"kubernetes.io/projected/9608e724-9bc7-4040-bfd3-29f159075de8-kube-api-access-fznnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-p42q4\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:41 crc kubenswrapper[4624]: I0228 04:12:41.997874 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:12:42 crc kubenswrapper[4624]: I0228 04:12:42.595268 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4"] Feb 28 04:12:43 crc kubenswrapper[4624]: I0228 04:12:43.567183 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" event={"ID":"9608e724-9bc7-4040-bfd3-29f159075de8","Type":"ContainerStarted","Data":"9d39f5c650849906133d21822054d6a7cdbaf93b9acbc0212e5b5036ab619091"} Feb 28 04:12:43 crc kubenswrapper[4624]: I0228 04:12:43.567818 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" event={"ID":"9608e724-9bc7-4040-bfd3-29f159075de8","Type":"ContainerStarted","Data":"9e9ac34413ee3cbdae3cf112871d55ecd5e9296a500b861f7d5b0f5c64896e7b"} Feb 28 04:12:43 crc kubenswrapper[4624]: I0228 04:12:43.600139 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" podStartSLOduration=2.082600317 podStartE2EDuration="2.600121044s" podCreationTimestamp="2026-02-28 04:12:41 +0000 UTC" firstStartedPulling="2026-02-28 04:12:42.593292434 +0000 UTC m=+2217.257331753" lastFinishedPulling="2026-02-28 04:12:43.110813151 +0000 UTC m=+2217.774852480" observedRunningTime="2026-02-28 04:12:43.594779609 +0000 UTC m=+2218.258818918" watchObservedRunningTime="2026-02-28 04:12:43.600121044 +0000 UTC m=+2218.264160433" Feb 28 04:12:49 crc 
kubenswrapper[4624]: I0228 04:12:49.540165 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:12:49 crc kubenswrapper[4624]: I0228 04:12:49.540948 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:13:01 crc kubenswrapper[4624]: I0228 04:13:01.651845 4624 scope.go:117] "RemoveContainer" containerID="2c1d5a2a5930c0cfaf8c052177e691590858fbd88cd70e73e9c983229854d57d" Feb 28 04:13:19 crc kubenswrapper[4624]: I0228 04:13:19.540392 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:13:19 crc kubenswrapper[4624]: I0228 04:13:19.541000 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:13:47 crc kubenswrapper[4624]: I0228 04:13:47.852871 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bpcq9"] Feb 28 04:13:47 crc kubenswrapper[4624]: I0228 04:13:47.877577 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:47 crc kubenswrapper[4624]: I0228 04:13:47.894852 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpcq9"] Feb 28 04:13:47 crc kubenswrapper[4624]: I0228 04:13:47.997645 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqbjw\" (UniqueName: \"kubernetes.io/projected/2949f9af-1641-4269-ba79-140e03fd1b09-kube-api-access-jqbjw\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:47 crc kubenswrapper[4624]: I0228 04:13:47.997783 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-catalog-content\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:47 crc kubenswrapper[4624]: I0228 04:13:47.997876 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-utilities\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.100106 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-utilities\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.100184 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jqbjw\" (UniqueName: \"kubernetes.io/projected/2949f9af-1641-4269-ba79-140e03fd1b09-kube-api-access-jqbjw\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.100582 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-catalog-content\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.100629 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-utilities\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.101104 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-catalog-content\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.125055 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqbjw\" (UniqueName: \"kubernetes.io/projected/2949f9af-1641-4269-ba79-140e03fd1b09-kube-api-access-jqbjw\") pod \"redhat-marketplace-bpcq9\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.228711 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:48 crc kubenswrapper[4624]: I0228 04:13:48.875665 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpcq9"] Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.522971 4624 generic.go:334] "Generic (PLEG): container finished" podID="2949f9af-1641-4269-ba79-140e03fd1b09" containerID="0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e" exitCode=0 Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.523019 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerDied","Data":"0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e"} Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.523050 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerStarted","Data":"4e33c06aec27d90f25af8d8612bc3829c378304f3d6337780465665bc904ef20"} Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.540660 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.540738 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.540802 4624 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.541926 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:13:49 crc kubenswrapper[4624]: I0228 04:13:49.542009 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" gracePeriod=600 Feb 28 04:13:49 crc kubenswrapper[4624]: E0228 04:13:49.707120 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:13:50 crc kubenswrapper[4624]: I0228 04:13:50.538208 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" exitCode=0 Feb 28 04:13:50 crc kubenswrapper[4624]: I0228 04:13:50.538270 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127"} Feb 28 04:13:50 crc kubenswrapper[4624]: I0228 04:13:50.538906 4624 scope.go:117] "RemoveContainer" containerID="5f66582844ab2a7a20d79ed088627297535eafe4e568aa67fdb624fc0d6a1269" Feb 28 04:13:50 crc kubenswrapper[4624]: I0228 04:13:50.540566 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:13:50 crc kubenswrapper[4624]: E0228 04:13:50.541052 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:13:50 crc kubenswrapper[4624]: I0228 04:13:50.544995 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerStarted","Data":"79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2"} Feb 28 04:13:51 crc kubenswrapper[4624]: I0228 04:13:51.564850 4624 generic.go:334] "Generic (PLEG): container finished" podID="2949f9af-1641-4269-ba79-140e03fd1b09" containerID="79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2" exitCode=0 Feb 28 04:13:51 crc kubenswrapper[4624]: I0228 04:13:51.564914 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerDied","Data":"79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2"} Feb 28 04:13:52 crc kubenswrapper[4624]: I0228 04:13:52.579696 4624 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerStarted","Data":"e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811"} Feb 28 04:13:58 crc kubenswrapper[4624]: I0228 04:13:58.229531 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:58 crc kubenswrapper[4624]: I0228 04:13:58.230419 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:58 crc kubenswrapper[4624]: I0228 04:13:58.307849 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:58 crc kubenswrapper[4624]: I0228 04:13:58.357294 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bpcq9" podStartSLOduration=8.864867287 podStartE2EDuration="11.357275125s" podCreationTimestamp="2026-02-28 04:13:47 +0000 UTC" firstStartedPulling="2026-02-28 04:13:49.526229909 +0000 UTC m=+2284.190269218" lastFinishedPulling="2026-02-28 04:13:52.018637747 +0000 UTC m=+2286.682677056" observedRunningTime="2026-02-28 04:13:52.610036864 +0000 UTC m=+2287.274076163" watchObservedRunningTime="2026-02-28 04:13:58.357275125 +0000 UTC m=+2293.021314434" Feb 28 04:13:58 crc kubenswrapper[4624]: I0228 04:13:58.719683 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:13:58 crc kubenswrapper[4624]: I0228 04:13:58.781018 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpcq9"] Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.161137 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537534-qtdsr"] Feb 28 04:14:00 crc 
kubenswrapper[4624]: I0228 04:14:00.162835 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.167611 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.167682 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.167799 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.201580 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537534-qtdsr"] Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.345738 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtc79\" (UniqueName: \"kubernetes.io/projected/c726c37e-59dc-4efb-ac43-697fcf8755e5-kube-api-access-jtc79\") pod \"auto-csr-approver-29537534-qtdsr\" (UID: \"c726c37e-59dc-4efb-ac43-697fcf8755e5\") " pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.449065 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtc79\" (UniqueName: \"kubernetes.io/projected/c726c37e-59dc-4efb-ac43-697fcf8755e5-kube-api-access-jtc79\") pod \"auto-csr-approver-29537534-qtdsr\" (UID: \"c726c37e-59dc-4efb-ac43-697fcf8755e5\") " pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.472149 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtc79\" (UniqueName: \"kubernetes.io/projected/c726c37e-59dc-4efb-ac43-697fcf8755e5-kube-api-access-jtc79\") pod 
\"auto-csr-approver-29537534-qtdsr\" (UID: \"c726c37e-59dc-4efb-ac43-697fcf8755e5\") " pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.492672 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.671642 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bpcq9" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="registry-server" containerID="cri-o://e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811" gracePeriod=2 Feb 28 04:14:00 crc kubenswrapper[4624]: I0228 04:14:00.844510 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537534-qtdsr"] Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.074048 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.088271 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:14:01 crc kubenswrapper[4624]: E0228 04:14:01.088702 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.266947 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-utilities\") 
pod \"2949f9af-1641-4269-ba79-140e03fd1b09\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.267702 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-catalog-content\") pod \"2949f9af-1641-4269-ba79-140e03fd1b09\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.267940 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqbjw\" (UniqueName: \"kubernetes.io/projected/2949f9af-1641-4269-ba79-140e03fd1b09-kube-api-access-jqbjw\") pod \"2949f9af-1641-4269-ba79-140e03fd1b09\" (UID: \"2949f9af-1641-4269-ba79-140e03fd1b09\") " Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.268760 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-utilities" (OuterVolumeSpecName: "utilities") pod "2949f9af-1641-4269-ba79-140e03fd1b09" (UID: "2949f9af-1641-4269-ba79-140e03fd1b09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.269899 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.276619 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2949f9af-1641-4269-ba79-140e03fd1b09-kube-api-access-jqbjw" (OuterVolumeSpecName: "kube-api-access-jqbjw") pod "2949f9af-1641-4269-ba79-140e03fd1b09" (UID: "2949f9af-1641-4269-ba79-140e03fd1b09"). InnerVolumeSpecName "kube-api-access-jqbjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.308037 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2949f9af-1641-4269-ba79-140e03fd1b09" (UID: "2949f9af-1641-4269-ba79-140e03fd1b09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.373235 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2949f9af-1641-4269-ba79-140e03fd1b09-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.373352 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqbjw\" (UniqueName: \"kubernetes.io/projected/2949f9af-1641-4269-ba79-140e03fd1b09-kube-api-access-jqbjw\") on node \"crc\" DevicePath \"\"" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.681338 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" event={"ID":"c726c37e-59dc-4efb-ac43-697fcf8755e5","Type":"ContainerStarted","Data":"e459baa8f8ccac1f6e7540c0149225cf58237477c9d8e6e1915c5915e29565aa"} Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.683213 4624 generic.go:334] "Generic (PLEG): container finished" podID="2949f9af-1641-4269-ba79-140e03fd1b09" containerID="e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811" exitCode=0 Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.683242 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerDied","Data":"e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811"} Feb 28 04:14:01 crc kubenswrapper[4624]: 
I0228 04:14:01.683273 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpcq9" event={"ID":"2949f9af-1641-4269-ba79-140e03fd1b09","Type":"ContainerDied","Data":"4e33c06aec27d90f25af8d8612bc3829c378304f3d6337780465665bc904ef20"} Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.683300 4624 scope.go:117] "RemoveContainer" containerID="e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.683305 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpcq9" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.727355 4624 scope.go:117] "RemoveContainer" containerID="79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.728556 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpcq9"] Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.740969 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpcq9"] Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.756583 4624 scope.go:117] "RemoveContainer" containerID="0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.820938 4624 scope.go:117] "RemoveContainer" containerID="e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811" Feb 28 04:14:01 crc kubenswrapper[4624]: E0228 04:14:01.821653 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811\": container with ID starting with e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811 not found: ID does not exist" 
containerID="e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.821715 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811"} err="failed to get container status \"e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811\": rpc error: code = NotFound desc = could not find container \"e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811\": container with ID starting with e09fc037841f064a34dfcd0730526b02e4351a20c553a34863fc66de5a98e811 not found: ID does not exist" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.821756 4624 scope.go:117] "RemoveContainer" containerID="79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2" Feb 28 04:14:01 crc kubenswrapper[4624]: E0228 04:14:01.822273 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2\": container with ID starting with 79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2 not found: ID does not exist" containerID="79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.822304 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2"} err="failed to get container status \"79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2\": rpc error: code = NotFound desc = could not find container \"79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2\": container with ID starting with 79ecd689330e06a74787770adc37ddf5f77d5f2a246b47482cc438cbc2031fe2 not found: ID does not exist" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.822324 4624 scope.go:117] 
"RemoveContainer" containerID="0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e" Feb 28 04:14:01 crc kubenswrapper[4624]: E0228 04:14:01.822553 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e\": container with ID starting with 0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e not found: ID does not exist" containerID="0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e" Feb 28 04:14:01 crc kubenswrapper[4624]: I0228 04:14:01.822576 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e"} err="failed to get container status \"0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e\": rpc error: code = NotFound desc = could not find container \"0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e\": container with ID starting with 0d5b4e4755960a16716945d4bc136f78e57c06db0b9fe55ec6aee2908ce9f96e not found: ID does not exist" Feb 28 04:14:02 crc kubenswrapper[4624]: I0228 04:14:02.099151 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" path="/var/lib/kubelet/pods/2949f9af-1641-4269-ba79-140e03fd1b09/volumes" Feb 28 04:14:02 crc kubenswrapper[4624]: I0228 04:14:02.693825 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" event={"ID":"c726c37e-59dc-4efb-ac43-697fcf8755e5","Type":"ContainerStarted","Data":"bebc2d44c48672600df2e35350bed6ba19e65c5d993ec76ed3ecdf76297aa40c"} Feb 28 04:14:02 crc kubenswrapper[4624]: I0228 04:14:02.716051 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" podStartSLOduration=1.789564071 podStartE2EDuration="2.716022396s" 
podCreationTimestamp="2026-02-28 04:14:00 +0000 UTC" firstStartedPulling="2026-02-28 04:14:00.851546433 +0000 UTC m=+2295.515585742" lastFinishedPulling="2026-02-28 04:14:01.778004758 +0000 UTC m=+2296.442044067" observedRunningTime="2026-02-28 04:14:02.713446696 +0000 UTC m=+2297.377486005" watchObservedRunningTime="2026-02-28 04:14:02.716022396 +0000 UTC m=+2297.380061695" Feb 28 04:14:03 crc kubenswrapper[4624]: I0228 04:14:03.708671 4624 generic.go:334] "Generic (PLEG): container finished" podID="c726c37e-59dc-4efb-ac43-697fcf8755e5" containerID="bebc2d44c48672600df2e35350bed6ba19e65c5d993ec76ed3ecdf76297aa40c" exitCode=0 Feb 28 04:14:03 crc kubenswrapper[4624]: I0228 04:14:03.708765 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" event={"ID":"c726c37e-59dc-4efb-ac43-697fcf8755e5","Type":"ContainerDied","Data":"bebc2d44c48672600df2e35350bed6ba19e65c5d993ec76ed3ecdf76297aa40c"} Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.074679 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.169511 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtc79\" (UniqueName: \"kubernetes.io/projected/c726c37e-59dc-4efb-ac43-697fcf8755e5-kube-api-access-jtc79\") pod \"c726c37e-59dc-4efb-ac43-697fcf8755e5\" (UID: \"c726c37e-59dc-4efb-ac43-697fcf8755e5\") " Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.178790 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c726c37e-59dc-4efb-ac43-697fcf8755e5-kube-api-access-jtc79" (OuterVolumeSpecName: "kube-api-access-jtc79") pod "c726c37e-59dc-4efb-ac43-697fcf8755e5" (UID: "c726c37e-59dc-4efb-ac43-697fcf8755e5"). InnerVolumeSpecName "kube-api-access-jtc79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.273524 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtc79\" (UniqueName: \"kubernetes.io/projected/c726c37e-59dc-4efb-ac43-697fcf8755e5-kube-api-access-jtc79\") on node \"crc\" DevicePath \"\"" Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.736023 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" event={"ID":"c726c37e-59dc-4efb-ac43-697fcf8755e5","Type":"ContainerDied","Data":"e459baa8f8ccac1f6e7540c0149225cf58237477c9d8e6e1915c5915e29565aa"} Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.736098 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e459baa8f8ccac1f6e7540c0149225cf58237477c9d8e6e1915c5915e29565aa" Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.736145 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537534-qtdsr" Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.820114 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537528-qrvbh"] Feb 28 04:14:05 crc kubenswrapper[4624]: I0228 04:14:05.834910 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537528-qrvbh"] Feb 28 04:14:06 crc kubenswrapper[4624]: I0228 04:14:06.104420 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3336f912-c3b2-4483-856f-f93def7322ed" path="/var/lib/kubelet/pods/3336f912-c3b2-4483-856f-f93def7322ed/volumes" Feb 28 04:14:15 crc kubenswrapper[4624]: I0228 04:14:15.088187 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:14:15 crc kubenswrapper[4624]: E0228 04:14:15.089155 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:14:27 crc kubenswrapper[4624]: I0228 04:14:27.087611 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:14:27 crc kubenswrapper[4624]: E0228 04:14:27.088918 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:14:38 crc kubenswrapper[4624]: I0228 04:14:38.087746 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:14:38 crc kubenswrapper[4624]: E0228 04:14:38.088997 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:14:49 crc kubenswrapper[4624]: I0228 04:14:49.088658 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:14:49 crc kubenswrapper[4624]: E0228 04:14:49.089686 4624 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.088215 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:15:00 crc kubenswrapper[4624]: E0228 04:15:00.089609 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.160846 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq"] Feb 28 04:15:00 crc kubenswrapper[4624]: E0228 04:15:00.161688 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="extract-content" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.163296 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="extract-content" Feb 28 04:15:00 crc kubenswrapper[4624]: E0228 04:15:00.163406 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="registry-server" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.163486 4624 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="registry-server" Feb 28 04:15:00 crc kubenswrapper[4624]: E0228 04:15:00.164131 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c726c37e-59dc-4efb-ac43-697fcf8755e5" containerName="oc" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.164217 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c726c37e-59dc-4efb-ac43-697fcf8755e5" containerName="oc" Feb 28 04:15:00 crc kubenswrapper[4624]: E0228 04:15:00.164330 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="extract-utilities" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.164418 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="extract-utilities" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.164898 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2949f9af-1641-4269-ba79-140e03fd1b09" containerName="registry-server" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.165015 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c726c37e-59dc-4efb-ac43-697fcf8755e5" containerName="oc" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.166011 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.169551 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.169584 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.176016 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq"] Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.257753 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-secret-volume\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.258107 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzdx\" (UniqueName: \"kubernetes.io/projected/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-kube-api-access-vkzdx\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.258254 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-config-volume\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.360827 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-config-volume\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.360987 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-secret-volume\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.361046 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzdx\" (UniqueName: \"kubernetes.io/projected/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-kube-api-access-vkzdx\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.361972 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-config-volume\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.376470 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-secret-volume\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.382653 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzdx\" (UniqueName: \"kubernetes.io/projected/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-kube-api-access-vkzdx\") pod \"collect-profiles-29537535-xtnrq\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:00 crc kubenswrapper[4624]: I0228 04:15:00.508770 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:01 crc kubenswrapper[4624]: I0228 04:15:01.003627 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq"] Feb 28 04:15:01 crc kubenswrapper[4624]: I0228 04:15:01.321211 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" event={"ID":"7c513c9c-a7f1-4939-b49c-d28e72cc7e87","Type":"ContainerStarted","Data":"f0ea742b647d7066fb66769f4af3e15d9e34ad636da132deb9acd1347c64a25a"} Feb 28 04:15:01 crc kubenswrapper[4624]: I0228 04:15:01.321269 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" event={"ID":"7c513c9c-a7f1-4939-b49c-d28e72cc7e87","Type":"ContainerStarted","Data":"84780358d0367013562cf9e1fb5878b1739a668a61adbfc8b49f9889a85994ff"} Feb 28 04:15:01 crc kubenswrapper[4624]: I0228 04:15:01.342454 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" 
podStartSLOduration=1.342427132 podStartE2EDuration="1.342427132s" podCreationTimestamp="2026-02-28 04:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:15:01.335517045 +0000 UTC m=+2355.999556394" watchObservedRunningTime="2026-02-28 04:15:01.342427132 +0000 UTC m=+2356.006466441" Feb 28 04:15:01 crc kubenswrapper[4624]: I0228 04:15:01.827827 4624 scope.go:117] "RemoveContainer" containerID="16b37111f35e9a75c1492aeba62a53888af79806814f30fcb2f44a4607d3c1be" Feb 28 04:15:02 crc kubenswrapper[4624]: I0228 04:15:02.332663 4624 generic.go:334] "Generic (PLEG): container finished" podID="7c513c9c-a7f1-4939-b49c-d28e72cc7e87" containerID="f0ea742b647d7066fb66769f4af3e15d9e34ad636da132deb9acd1347c64a25a" exitCode=0 Feb 28 04:15:02 crc kubenswrapper[4624]: I0228 04:15:02.333285 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" event={"ID":"7c513c9c-a7f1-4939-b49c-d28e72cc7e87","Type":"ContainerDied","Data":"f0ea742b647d7066fb66769f4af3e15d9e34ad636da132deb9acd1347c64a25a"} Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.723724 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.854322 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-secret-volume\") pod \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.854494 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzdx\" (UniqueName: \"kubernetes.io/projected/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-kube-api-access-vkzdx\") pod \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.855934 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-config-volume\") pod \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\" (UID: \"7c513c9c-a7f1-4939-b49c-d28e72cc7e87\") " Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.856918 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-config-volume" (OuterVolumeSpecName: "config-volume") pod "7c513c9c-a7f1-4939-b49c-d28e72cc7e87" (UID: "7c513c9c-a7f1-4939-b49c-d28e72cc7e87"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.864687 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7c513c9c-a7f1-4939-b49c-d28e72cc7e87" (UID: "7c513c9c-a7f1-4939-b49c-d28e72cc7e87"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.873458 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-kube-api-access-vkzdx" (OuterVolumeSpecName: "kube-api-access-vkzdx") pod "7c513c9c-a7f1-4939-b49c-d28e72cc7e87" (UID: "7c513c9c-a7f1-4939-b49c-d28e72cc7e87"). InnerVolumeSpecName "kube-api-access-vkzdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.958921 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.958966 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:15:03 crc kubenswrapper[4624]: I0228 04:15:03.958982 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzdx\" (UniqueName: \"kubernetes.io/projected/7c513c9c-a7f1-4939-b49c-d28e72cc7e87-kube-api-access-vkzdx\") on node \"crc\" DevicePath \"\"" Feb 28 04:15:04 crc kubenswrapper[4624]: I0228 04:15:04.352464 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" event={"ID":"7c513c9c-a7f1-4939-b49c-d28e72cc7e87","Type":"ContainerDied","Data":"84780358d0367013562cf9e1fb5878b1739a668a61adbfc8b49f9889a85994ff"} Feb 28 04:15:04 crc kubenswrapper[4624]: I0228 04:15:04.352515 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84780358d0367013562cf9e1fb5878b1739a668a61adbfc8b49f9889a85994ff" Feb 28 04:15:04 crc kubenswrapper[4624]: I0228 04:15:04.352597 4624 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537535-xtnrq" Feb 28 04:15:04 crc kubenswrapper[4624]: I0228 04:15:04.427789 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"] Feb 28 04:15:04 crc kubenswrapper[4624]: I0228 04:15:04.436384 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537490-bm2gd"] Feb 28 04:15:06 crc kubenswrapper[4624]: I0228 04:15:06.101581 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d82af0-7eea-4c15-af5d-e58d1a0b6721" path="/var/lib/kubelet/pods/11d82af0-7eea-4c15-af5d-e58d1a0b6721/volumes" Feb 28 04:15:13 crc kubenswrapper[4624]: I0228 04:15:13.089438 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:15:13 crc kubenswrapper[4624]: E0228 04:15:13.090533 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:15:26 crc kubenswrapper[4624]: I0228 04:15:26.097995 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:15:26 crc kubenswrapper[4624]: E0228 04:15:26.099476 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:15:41 crc kubenswrapper[4624]: I0228 04:15:41.087750 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:15:41 crc kubenswrapper[4624]: E0228 04:15:41.088769 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:15:53 crc kubenswrapper[4624]: I0228 04:15:53.087146 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:15:53 crc kubenswrapper[4624]: E0228 04:15:53.088222 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.147263 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537536-qzxnm"] Feb 28 04:16:00 crc kubenswrapper[4624]: E0228 04:16:00.148280 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c513c9c-a7f1-4939-b49c-d28e72cc7e87" containerName="collect-profiles" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.148297 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c513c9c-a7f1-4939-b49c-d28e72cc7e87" 
containerName="collect-profiles" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.148628 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c513c9c-a7f1-4939-b49c-d28e72cc7e87" containerName="collect-profiles" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.149433 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.151873 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.155474 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.155735 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.167138 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537536-qzxnm"] Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.267347 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjgf\" (UniqueName: \"kubernetes.io/projected/626e3e8f-5304-4a60-8538-0ab270497329-kube-api-access-8fjgf\") pod \"auto-csr-approver-29537536-qzxnm\" (UID: \"626e3e8f-5304-4a60-8538-0ab270497329\") " pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.369036 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fjgf\" (UniqueName: \"kubernetes.io/projected/626e3e8f-5304-4a60-8538-0ab270497329-kube-api-access-8fjgf\") pod \"auto-csr-approver-29537536-qzxnm\" (UID: \"626e3e8f-5304-4a60-8538-0ab270497329\") " pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 
04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.395918 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fjgf\" (UniqueName: \"kubernetes.io/projected/626e3e8f-5304-4a60-8538-0ab270497329-kube-api-access-8fjgf\") pod \"auto-csr-approver-29537536-qzxnm\" (UID: \"626e3e8f-5304-4a60-8538-0ab270497329\") " pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.468210 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 04:16:00 crc kubenswrapper[4624]: I0228 04:16:00.980503 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537536-qzxnm"] Feb 28 04:16:01 crc kubenswrapper[4624]: I0228 04:16:01.958998 4624 scope.go:117] "RemoveContainer" containerID="e144a3f56cecfc8f771aa54c3eb17b7492d2659f7761160abe9f8827cf138078" Feb 28 04:16:01 crc kubenswrapper[4624]: I0228 04:16:01.968132 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" event={"ID":"626e3e8f-5304-4a60-8538-0ab270497329","Type":"ContainerStarted","Data":"5505e5f1591888e7a09db28bc687b1b59aa4ae662892def8fb5def7f4133dd5a"} Feb 28 04:16:02 crc kubenswrapper[4624]: I0228 04:16:02.981059 4624 generic.go:334] "Generic (PLEG): container finished" podID="626e3e8f-5304-4a60-8538-0ab270497329" containerID="da9eec1f701d66e25eff326a724e9ffec374244faaac7ce6cf42d17bbbc2c94f" exitCode=0 Feb 28 04:16:02 crc kubenswrapper[4624]: I0228 04:16:02.981123 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" event={"ID":"626e3e8f-5304-4a60-8538-0ab270497329","Type":"ContainerDied","Data":"da9eec1f701d66e25eff326a724e9ffec374244faaac7ce6cf42d17bbbc2c94f"} Feb 28 04:16:04 crc kubenswrapper[4624]: I0228 04:16:04.365733 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 04:16:04 crc kubenswrapper[4624]: I0228 04:16:04.392682 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fjgf\" (UniqueName: \"kubernetes.io/projected/626e3e8f-5304-4a60-8538-0ab270497329-kube-api-access-8fjgf\") pod \"626e3e8f-5304-4a60-8538-0ab270497329\" (UID: \"626e3e8f-5304-4a60-8538-0ab270497329\") " Feb 28 04:16:04 crc kubenswrapper[4624]: I0228 04:16:04.402597 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626e3e8f-5304-4a60-8538-0ab270497329-kube-api-access-8fjgf" (OuterVolumeSpecName: "kube-api-access-8fjgf") pod "626e3e8f-5304-4a60-8538-0ab270497329" (UID: "626e3e8f-5304-4a60-8538-0ab270497329"). InnerVolumeSpecName "kube-api-access-8fjgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:16:04 crc kubenswrapper[4624]: I0228 04:16:04.495622 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fjgf\" (UniqueName: \"kubernetes.io/projected/626e3e8f-5304-4a60-8538-0ab270497329-kube-api-access-8fjgf\") on node \"crc\" DevicePath \"\"" Feb 28 04:16:05 crc kubenswrapper[4624]: I0228 04:16:05.006304 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" event={"ID":"626e3e8f-5304-4a60-8538-0ab270497329","Type":"ContainerDied","Data":"5505e5f1591888e7a09db28bc687b1b59aa4ae662892def8fb5def7f4133dd5a"} Feb 28 04:16:05 crc kubenswrapper[4624]: I0228 04:16:05.006743 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5505e5f1591888e7a09db28bc687b1b59aa4ae662892def8fb5def7f4133dd5a" Feb 28 04:16:05 crc kubenswrapper[4624]: I0228 04:16:05.006403 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537536-qzxnm" Feb 28 04:16:05 crc kubenswrapper[4624]: I0228 04:16:05.096885 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:16:05 crc kubenswrapper[4624]: E0228 04:16:05.097177 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:16:05 crc kubenswrapper[4624]: I0228 04:16:05.457799 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537530-wjkfc"] Feb 28 04:16:05 crc kubenswrapper[4624]: I0228 04:16:05.481836 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537530-wjkfc"] Feb 28 04:16:06 crc kubenswrapper[4624]: I0228 04:16:06.102176 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65556c4-c00e-4478-808e-a3f869783e84" path="/var/lib/kubelet/pods/d65556c4-c00e-4478-808e-a3f869783e84/volumes" Feb 28 04:16:18 crc kubenswrapper[4624]: I0228 04:16:18.087231 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:16:18 crc kubenswrapper[4624]: E0228 04:16:18.088621 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:16:22 crc kubenswrapper[4624]: I0228 04:16:21.999534 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-sms8n" podUID="3293163b-b75d-40f1-b004-8d938c413a4b" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:16:31 crc kubenswrapper[4624]: I0228 04:16:31.087811 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:16:31 crc kubenswrapper[4624]: E0228 04:16:31.090275 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:16:44 crc kubenswrapper[4624]: I0228 04:16:44.087561 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:16:44 crc kubenswrapper[4624]: E0228 04:16:44.088639 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:16:48 crc kubenswrapper[4624]: I0228 04:16:48.449225 4624 generic.go:334] "Generic (PLEG): container finished" podID="9608e724-9bc7-4040-bfd3-29f159075de8" containerID="9d39f5c650849906133d21822054d6a7cdbaf93b9acbc0212e5b5036ab619091" exitCode=0 Feb 28 04:16:48 crc 
kubenswrapper[4624]: I0228 04:16:48.449342 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" event={"ID":"9608e724-9bc7-4040-bfd3-29f159075de8","Type":"ContainerDied","Data":"9d39f5c650849906133d21822054d6a7cdbaf93b9acbc0212e5b5036ab619091"} Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.902073 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.956157 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-inventory\") pod \"9608e724-9bc7-4040-bfd3-29f159075de8\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.956240 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-ssh-key-openstack-edpm-ipam\") pod \"9608e724-9bc7-4040-bfd3-29f159075de8\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.956451 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-secret-0\") pod \"9608e724-9bc7-4040-bfd3-29f159075de8\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.956484 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-combined-ca-bundle\") pod \"9608e724-9bc7-4040-bfd3-29f159075de8\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " Feb 28 04:16:49 crc 
kubenswrapper[4624]: I0228 04:16:49.956625 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fznnl\" (UniqueName: \"kubernetes.io/projected/9608e724-9bc7-4040-bfd3-29f159075de8-kube-api-access-fznnl\") pod \"9608e724-9bc7-4040-bfd3-29f159075de8\" (UID: \"9608e724-9bc7-4040-bfd3-29f159075de8\") " Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.965039 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9608e724-9bc7-4040-bfd3-29f159075de8-kube-api-access-fznnl" (OuterVolumeSpecName: "kube-api-access-fznnl") pod "9608e724-9bc7-4040-bfd3-29f159075de8" (UID: "9608e724-9bc7-4040-bfd3-29f159075de8"). InnerVolumeSpecName "kube-api-access-fznnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.976367 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9608e724-9bc7-4040-bfd3-29f159075de8" (UID: "9608e724-9bc7-4040-bfd3-29f159075de8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:16:49 crc kubenswrapper[4624]: I0228 04:16:49.988852 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9608e724-9bc7-4040-bfd3-29f159075de8" (UID: "9608e724-9bc7-4040-bfd3-29f159075de8"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.002411 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9608e724-9bc7-4040-bfd3-29f159075de8" (UID: "9608e724-9bc7-4040-bfd3-29f159075de8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.013684 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-inventory" (OuterVolumeSpecName: "inventory") pod "9608e724-9bc7-4040-bfd3-29f159075de8" (UID: "9608e724-9bc7-4040-bfd3-29f159075de8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.058824 4624 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.058860 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fznnl\" (UniqueName: \"kubernetes.io/projected/9608e724-9bc7-4040-bfd3-29f159075de8-kube-api-access-fznnl\") on node \"crc\" DevicePath \"\"" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.058871 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.058880 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.058893 4624 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9608e724-9bc7-4040-bfd3-29f159075de8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.476937 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" event={"ID":"9608e724-9bc7-4040-bfd3-29f159075de8","Type":"ContainerDied","Data":"9e9ac34413ee3cbdae3cf112871d55ecd5e9296a500b861f7d5b0f5c64896e7b"} Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.476987 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e9ac34413ee3cbdae3cf112871d55ecd5e9296a500b861f7d5b0f5c64896e7b" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.477299 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-p42q4" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.573135 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw"] Feb 28 04:16:50 crc kubenswrapper[4624]: E0228 04:16:50.574307 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626e3e8f-5304-4a60-8538-0ab270497329" containerName="oc" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.574513 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="626e3e8f-5304-4a60-8538-0ab270497329" containerName="oc" Feb 28 04:16:50 crc kubenswrapper[4624]: E0228 04:16:50.574645 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9608e724-9bc7-4040-bfd3-29f159075de8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.574760 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9608e724-9bc7-4040-bfd3-29f159075de8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.575356 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9608e724-9bc7-4040-bfd3-29f159075de8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.575537 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="626e3e8f-5304-4a60-8538-0ab270497329" containerName="oc" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.576739 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.579182 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.579552 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.582693 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw"] Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.585383 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.585900 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.586332 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.586332 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.586395 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670355 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: 
I0228 04:16:50.670415 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670439 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670503 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670541 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670572 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/15a08883-796d-49b7-a003-66cb6cc51189-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670593 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpntg\" (UniqueName: \"kubernetes.io/projected/15a08883-796d-49b7-a003-66cb6cc51189-kube-api-access-vpntg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670681 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670717 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670765 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: 
\"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.670787 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773149 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773207 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773251 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773298 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773330 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773411 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773455 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773497 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/15a08883-796d-49b7-a003-66cb6cc51189-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" 
(UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773524 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpntg\" (UniqueName: \"kubernetes.io/projected/15a08883-796d-49b7-a003-66cb6cc51189-kube-api-access-vpntg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773566 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.773614 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.774937 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/15a08883-796d-49b7-a003-66cb6cc51189-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.780726 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.781001 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.781501 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.781701 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.782640 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.783527 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.786548 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.786866 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.787468 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.792793 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpntg\" (UniqueName: 
\"kubernetes.io/projected/15a08883-796d-49b7-a003-66cb6cc51189-kube-api-access-vpntg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fr4lw\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:50 crc kubenswrapper[4624]: I0228 04:16:50.894860 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:16:51 crc kubenswrapper[4624]: I0228 04:16:51.514638 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw"] Feb 28 04:16:51 crc kubenswrapper[4624]: I0228 04:16:51.528800 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:16:52 crc kubenswrapper[4624]: I0228 04:16:52.497625 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" event={"ID":"15a08883-796d-49b7-a003-66cb6cc51189","Type":"ContainerStarted","Data":"ceade85eaf0b4503cb94c23e91054c276ddf271f8254a3970798bae826363007"} Feb 28 04:16:52 crc kubenswrapper[4624]: I0228 04:16:52.498269 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" event={"ID":"15a08883-796d-49b7-a003-66cb6cc51189","Type":"ContainerStarted","Data":"1cc30d4316ae8f73bcdebc2c6337d5c1cfa863dcb0af75893a1b9d768b78b264"} Feb 28 04:16:52 crc kubenswrapper[4624]: I0228 04:16:52.520649 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" podStartSLOduration=1.934184037 podStartE2EDuration="2.520604303s" podCreationTimestamp="2026-02-28 04:16:50 +0000 UTC" firstStartedPulling="2026-02-28 04:16:51.528522394 +0000 UTC m=+2466.192561703" lastFinishedPulling="2026-02-28 04:16:52.11494265 +0000 UTC m=+2466.778981969" observedRunningTime="2026-02-28 04:16:52.516554944 
+0000 UTC m=+2467.180594263" watchObservedRunningTime="2026-02-28 04:16:52.520604303 +0000 UTC m=+2467.184643612" Feb 28 04:16:56 crc kubenswrapper[4624]: I0228 04:16:56.096125 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:16:56 crc kubenswrapper[4624]: E0228 04:16:56.097444 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:17:02 crc kubenswrapper[4624]: I0228 04:17:02.065147 4624 scope.go:117] "RemoveContainer" containerID="16b088b1226bc33dd8ced5530e13a2691df082cd51eff4da1b9b788fafe97270" Feb 28 04:17:11 crc kubenswrapper[4624]: I0228 04:17:11.088894 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:17:11 crc kubenswrapper[4624]: E0228 04:17:11.090206 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:17:23 crc kubenswrapper[4624]: I0228 04:17:23.088465 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:17:23 crc kubenswrapper[4624]: E0228 04:17:23.089576 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:17:36 crc kubenswrapper[4624]: I0228 04:17:36.093010 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:17:36 crc kubenswrapper[4624]: E0228 04:17:36.094014 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:17:49 crc kubenswrapper[4624]: I0228 04:17:49.087789 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:17:49 crc kubenswrapper[4624]: E0228 04:17:49.088785 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.139732 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537538-9mscz"] Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.141851 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.149898 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.150168 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.150323 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.151553 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537538-9mscz"] Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.152517 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9brp9\" (UniqueName: \"kubernetes.io/projected/6009d647-0a14-455b-a804-edffda2d3941-kube-api-access-9brp9\") pod \"auto-csr-approver-29537538-9mscz\" (UID: \"6009d647-0a14-455b-a804-edffda2d3941\") " pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.254727 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9brp9\" (UniqueName: \"kubernetes.io/projected/6009d647-0a14-455b-a804-edffda2d3941-kube-api-access-9brp9\") pod \"auto-csr-approver-29537538-9mscz\" (UID: \"6009d647-0a14-455b-a804-edffda2d3941\") " pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.279509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9brp9\" (UniqueName: \"kubernetes.io/projected/6009d647-0a14-455b-a804-edffda2d3941-kube-api-access-9brp9\") pod \"auto-csr-approver-29537538-9mscz\" (UID: \"6009d647-0a14-455b-a804-edffda2d3941\") " 
pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.463561 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:00 crc kubenswrapper[4624]: I0228 04:18:00.920167 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537538-9mscz"] Feb 28 04:18:01 crc kubenswrapper[4624]: I0228 04:18:01.087748 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:18:01 crc kubenswrapper[4624]: E0228 04:18:01.088534 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:18:01 crc kubenswrapper[4624]: I0228 04:18:01.238180 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537538-9mscz" event={"ID":"6009d647-0a14-455b-a804-edffda2d3941","Type":"ContainerStarted","Data":"8e8d1537350d7e3a9c9d22ad95b767a48eaa39d073b30d293da4158c4cfbd262"} Feb 28 04:18:02 crc kubenswrapper[4624]: I0228 04:18:02.262200 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537538-9mscz" event={"ID":"6009d647-0a14-455b-a804-edffda2d3941","Type":"ContainerStarted","Data":"dd0d591ce6666f475f6fe6fe873d5bb71f7e69ee908b1f27b332dd4fad247e26"} Feb 28 04:18:02 crc kubenswrapper[4624]: I0228 04:18:02.291244 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537538-9mscz" podStartSLOduration=1.440779177 
podStartE2EDuration="2.291221169s" podCreationTimestamp="2026-02-28 04:18:00 +0000 UTC" firstStartedPulling="2026-02-28 04:18:00.920902438 +0000 UTC m=+2535.584941747" lastFinishedPulling="2026-02-28 04:18:01.77134443 +0000 UTC m=+2536.435383739" observedRunningTime="2026-02-28 04:18:02.2786606 +0000 UTC m=+2536.942699919" watchObservedRunningTime="2026-02-28 04:18:02.291221169 +0000 UTC m=+2536.955260478" Feb 28 04:18:03 crc kubenswrapper[4624]: I0228 04:18:03.271633 4624 generic.go:334] "Generic (PLEG): container finished" podID="6009d647-0a14-455b-a804-edffda2d3941" containerID="dd0d591ce6666f475f6fe6fe873d5bb71f7e69ee908b1f27b332dd4fad247e26" exitCode=0 Feb 28 04:18:03 crc kubenswrapper[4624]: I0228 04:18:03.271862 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537538-9mscz" event={"ID":"6009d647-0a14-455b-a804-edffda2d3941","Type":"ContainerDied","Data":"dd0d591ce6666f475f6fe6fe873d5bb71f7e69ee908b1f27b332dd4fad247e26"} Feb 28 04:18:04 crc kubenswrapper[4624]: I0228 04:18:04.635667 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:04 crc kubenswrapper[4624]: I0228 04:18:04.747221 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9brp9\" (UniqueName: \"kubernetes.io/projected/6009d647-0a14-455b-a804-edffda2d3941-kube-api-access-9brp9\") pod \"6009d647-0a14-455b-a804-edffda2d3941\" (UID: \"6009d647-0a14-455b-a804-edffda2d3941\") " Feb 28 04:18:04 crc kubenswrapper[4624]: I0228 04:18:04.755868 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6009d647-0a14-455b-a804-edffda2d3941-kube-api-access-9brp9" (OuterVolumeSpecName: "kube-api-access-9brp9") pod "6009d647-0a14-455b-a804-edffda2d3941" (UID: "6009d647-0a14-455b-a804-edffda2d3941"). InnerVolumeSpecName "kube-api-access-9brp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:18:04 crc kubenswrapper[4624]: I0228 04:18:04.850500 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9brp9\" (UniqueName: \"kubernetes.io/projected/6009d647-0a14-455b-a804-edffda2d3941-kube-api-access-9brp9\") on node \"crc\" DevicePath \"\"" Feb 28 04:18:05 crc kubenswrapper[4624]: I0228 04:18:05.297399 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537538-9mscz" event={"ID":"6009d647-0a14-455b-a804-edffda2d3941","Type":"ContainerDied","Data":"8e8d1537350d7e3a9c9d22ad95b767a48eaa39d073b30d293da4158c4cfbd262"} Feb 28 04:18:05 crc kubenswrapper[4624]: I0228 04:18:05.297724 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8d1537350d7e3a9c9d22ad95b767a48eaa39d073b30d293da4158c4cfbd262" Feb 28 04:18:05 crc kubenswrapper[4624]: I0228 04:18:05.297486 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537538-9mscz" Feb 28 04:18:05 crc kubenswrapper[4624]: I0228 04:18:05.373932 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537532-klq9j"] Feb 28 04:18:05 crc kubenswrapper[4624]: I0228 04:18:05.381983 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537532-klq9j"] Feb 28 04:18:06 crc kubenswrapper[4624]: I0228 04:18:06.097688 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22f2130-3836-4da5-9fb1-1095decd41b2" path="/var/lib/kubelet/pods/a22f2130-3836-4da5-9fb1-1095decd41b2/volumes" Feb 28 04:18:12 crc kubenswrapper[4624]: I0228 04:18:12.087772 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:18:12 crc kubenswrapper[4624]: E0228 04:18:12.088785 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:18:25 crc kubenswrapper[4624]: I0228 04:18:25.088266 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:18:25 crc kubenswrapper[4624]: E0228 04:18:25.089180 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:18:37 crc kubenswrapper[4624]: I0228 04:18:37.087050 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:18:37 crc kubenswrapper[4624]: E0228 04:18:37.088033 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:18:49 crc kubenswrapper[4624]: I0228 04:18:49.088604 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:18:49 crc kubenswrapper[4624]: E0228 04:18:49.089485 4624 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:19:02 crc kubenswrapper[4624]: I0228 04:19:02.190240 4624 scope.go:117] "RemoveContainer" containerID="d3bff60e90c6ebb30b4cac9cd31cc5b59cebe4251b72ab1d3ae849c8a217414d" Feb 28 04:19:03 crc kubenswrapper[4624]: I0228 04:19:03.087968 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:19:03 crc kubenswrapper[4624]: I0228 04:19:03.886454 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"fcf1ca1dfe79f2c158561d47710b70b5dfaf73e2fc1d5b04c1b6a36fd5c53020"} Feb 28 04:19:37 crc kubenswrapper[4624]: I0228 04:19:37.265981 4624 generic.go:334] "Generic (PLEG): container finished" podID="15a08883-796d-49b7-a003-66cb6cc51189" containerID="ceade85eaf0b4503cb94c23e91054c276ddf271f8254a3970798bae826363007" exitCode=0 Feb 28 04:19:37 crc kubenswrapper[4624]: I0228 04:19:37.266124 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" event={"ID":"15a08883-796d-49b7-a003-66cb6cc51189","Type":"ContainerDied","Data":"ceade85eaf0b4503cb94c23e91054c276ddf271f8254a3970798bae826363007"} Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.693828 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783318 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-2\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783446 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-1\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783501 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-0\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783558 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-inventory\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783677 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpntg\" (UniqueName: \"kubernetes.io/projected/15a08883-796d-49b7-a003-66cb6cc51189-kube-api-access-vpntg\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783728 4624 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/15a08883-796d-49b7-a003-66cb6cc51189-nova-extra-config-0\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783771 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-1\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783815 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-ssh-key-openstack-edpm-ipam\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783855 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-combined-ca-bundle\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783932 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-3\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.783967 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-0\") pod \"15a08883-796d-49b7-a003-66cb6cc51189\" (UID: \"15a08883-796d-49b7-a003-66cb6cc51189\") " Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.810322 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a08883-796d-49b7-a003-66cb6cc51189-kube-api-access-vpntg" (OuterVolumeSpecName: "kube-api-access-vpntg") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "kube-api-access-vpntg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.817365 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.832453 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.833346 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). 
InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.833745 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.837416 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-inventory" (OuterVolumeSpecName: "inventory") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.843394 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a08883-796d-49b7-a003-66cb6cc51189-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.849244 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.850931 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.863283 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.864223 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "15a08883-796d-49b7-a003-66cb6cc51189" (UID: "15a08883-796d-49b7-a003-66cb6cc51189"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887401 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpntg\" (UniqueName: \"kubernetes.io/projected/15a08883-796d-49b7-a003-66cb6cc51189-kube-api-access-vpntg\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887436 4624 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/15a08883-796d-49b7-a003-66cb6cc51189-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887447 4624 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887457 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887466 4624 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887477 4624 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887485 4624 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887495 4624 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887503 4624 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887512 4624 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:38 crc kubenswrapper[4624]: I0228 04:19:38.887521 4624 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15a08883-796d-49b7-a003-66cb6cc51189-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.285363 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" event={"ID":"15a08883-796d-49b7-a003-66cb6cc51189","Type":"ContainerDied","Data":"1cc30d4316ae8f73bcdebc2c6337d5c1cfa863dcb0af75893a1b9d768b78b264"} Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.286155 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc30d4316ae8f73bcdebc2c6337d5c1cfa863dcb0af75893a1b9d768b78b264" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.286316 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fr4lw" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.447334 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq"] Feb 28 04:19:39 crc kubenswrapper[4624]: E0228 04:19:39.448063 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a08883-796d-49b7-a003-66cb6cc51189" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.448135 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a08883-796d-49b7-a003-66cb6cc51189" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 04:19:39 crc kubenswrapper[4624]: E0228 04:19:39.448181 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6009d647-0a14-455b-a804-edffda2d3941" containerName="oc" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.448191 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="6009d647-0a14-455b-a804-edffda2d3941" containerName="oc" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.448433 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a08883-796d-49b7-a003-66cb6cc51189" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.448467 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="6009d647-0a14-455b-a804-edffda2d3941" containerName="oc" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.450408 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.453283 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.453343 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.453646 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.454134 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pb95n" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.454519 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.468907 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq"] Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615094 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615147 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615404 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615532 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8rwd\" (UniqueName: \"kubernetes.io/projected/2588d2da-daa4-4eb7-b706-25290e0840c7-kube-api-access-q8rwd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615623 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615721 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: 
\"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.615956 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.718238 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.718295 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.718333 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 
04:19:39.718372 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8rwd\" (UniqueName: \"kubernetes.io/projected/2588d2da-daa4-4eb7-b706-25290e0840c7-kube-api-access-q8rwd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.718402 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.718425 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.718470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.723667 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.724243 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.724398 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.725955 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.734228 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8rwd\" (UniqueName: \"kubernetes.io/projected/2588d2da-daa4-4eb7-b706-25290e0840c7-kube-api-access-q8rwd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: 
\"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.737893 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.739509 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:39 crc kubenswrapper[4624]: I0228 04:19:39.773604 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:19:40 crc kubenswrapper[4624]: I0228 04:19:40.387293 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq"] Feb 28 04:19:41 crc kubenswrapper[4624]: I0228 04:19:41.310869 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" event={"ID":"2588d2da-daa4-4eb7-b706-25290e0840c7","Type":"ContainerStarted","Data":"e89b643920ae5ed8077602711a6487f3cd27ac0e14460286c91fcc56c1704eb1"} Feb 28 04:19:41 crc kubenswrapper[4624]: I0228 04:19:41.311571 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" event={"ID":"2588d2da-daa4-4eb7-b706-25290e0840c7","Type":"ContainerStarted","Data":"05749a5727e8e40bc9ef02b1e54818621582ddbf8805a97be7d8e5ae519c8bcf"} Feb 28 04:19:41 crc kubenswrapper[4624]: I0228 04:19:41.339110 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" podStartSLOduration=1.8164743840000002 podStartE2EDuration="2.339060692s" podCreationTimestamp="2026-02-28 04:19:39 +0000 UTC" firstStartedPulling="2026-02-28 04:19:40.381717377 +0000 UTC m=+2635.045756686" lastFinishedPulling="2026-02-28 04:19:40.904303675 +0000 UTC m=+2635.568342994" observedRunningTime="2026-02-28 04:19:41.330056629 +0000 UTC m=+2635.994095938" watchObservedRunningTime="2026-02-28 04:19:41.339060692 +0000 UTC m=+2636.003100011" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.143226 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537540-6v8nf"] Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.145273 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.147618 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.150421 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.153889 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537540-6v8nf"] Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.154382 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.322295 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8zr\" (UniqueName: \"kubernetes.io/projected/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10-kube-api-access-jp8zr\") pod \"auto-csr-approver-29537540-6v8nf\" (UID: \"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10\") " pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.423690 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8zr\" (UniqueName: \"kubernetes.io/projected/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10-kube-api-access-jp8zr\") pod \"auto-csr-approver-29537540-6v8nf\" (UID: \"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10\") " pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.447752 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8zr\" (UniqueName: \"kubernetes.io/projected/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10-kube-api-access-jp8zr\") pod \"auto-csr-approver-29537540-6v8nf\" (UID: \"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10\") " 
pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:00 crc kubenswrapper[4624]: I0228 04:20:00.530988 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:01 crc kubenswrapper[4624]: I0228 04:20:01.012904 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537540-6v8nf"] Feb 28 04:20:01 crc kubenswrapper[4624]: I0228 04:20:01.503598 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" event={"ID":"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10","Type":"ContainerStarted","Data":"e87cf38b4a5a133500de40425acf4bc368ca6b0034ad4bb8e309951a97c9e311"} Feb 28 04:20:02 crc kubenswrapper[4624]: I0228 04:20:02.517475 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" event={"ID":"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10","Type":"ContainerStarted","Data":"e4e45a03135cb7489d62c371152ee3e2ecd713ead015310db28c6756c57e6a82"} Feb 28 04:20:02 crc kubenswrapper[4624]: I0228 04:20:02.543782 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" podStartSLOduration=1.6352093060000001 podStartE2EDuration="2.543751183s" podCreationTimestamp="2026-02-28 04:20:00 +0000 UTC" firstStartedPulling="2026-02-28 04:20:01.023961224 +0000 UTC m=+2655.688000533" lastFinishedPulling="2026-02-28 04:20:01.932503101 +0000 UTC m=+2656.596542410" observedRunningTime="2026-02-28 04:20:02.531907023 +0000 UTC m=+2657.195946342" watchObservedRunningTime="2026-02-28 04:20:02.543751183 +0000 UTC m=+2657.207790492" Feb 28 04:20:03 crc kubenswrapper[4624]: I0228 04:20:03.531339 4624 generic.go:334] "Generic (PLEG): container finished" podID="e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10" containerID="e4e45a03135cb7489d62c371152ee3e2ecd713ead015310db28c6756c57e6a82" exitCode=0 Feb 28 04:20:03 crc 
kubenswrapper[4624]: I0228 04:20:03.531468 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" event={"ID":"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10","Type":"ContainerDied","Data":"e4e45a03135cb7489d62c371152ee3e2ecd713ead015310db28c6756c57e6a82"} Feb 28 04:20:04 crc kubenswrapper[4624]: I0228 04:20:04.963275 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.048883 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp8zr\" (UniqueName: \"kubernetes.io/projected/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10-kube-api-access-jp8zr\") pod \"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10\" (UID: \"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10\") " Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.064484 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10-kube-api-access-jp8zr" (OuterVolumeSpecName: "kube-api-access-jp8zr") pod "e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10" (UID: "e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10"). InnerVolumeSpecName "kube-api-access-jp8zr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.151572 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp8zr\" (UniqueName: \"kubernetes.io/projected/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10-kube-api-access-jp8zr\") on node \"crc\" DevicePath \"\"" Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.560679 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" event={"ID":"e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10","Type":"ContainerDied","Data":"e87cf38b4a5a133500de40425acf4bc368ca6b0034ad4bb8e309951a97c9e311"} Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.560740 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e87cf38b4a5a133500de40425acf4bc368ca6b0034ad4bb8e309951a97c9e311" Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.560750 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537540-6v8nf" Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.655755 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537534-qtdsr"] Feb 28 04:20:05 crc kubenswrapper[4624]: I0228 04:20:05.667753 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537534-qtdsr"] Feb 28 04:20:06 crc kubenswrapper[4624]: I0228 04:20:06.104979 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c726c37e-59dc-4efb-ac43-697fcf8755e5" path="/var/lib/kubelet/pods/c726c37e-59dc-4efb-ac43-697fcf8755e5/volumes" Feb 28 04:21:02 crc kubenswrapper[4624]: I0228 04:21:02.315008 4624 scope.go:117] "RemoveContainer" containerID="bebc2d44c48672600df2e35350bed6ba19e65c5d993ec76ed3ecdf76297aa40c" Feb 28 04:21:19 crc kubenswrapper[4624]: I0228 04:21:19.539636 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:21:19 crc kubenswrapper[4624]: I0228 04:21:19.540451 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:21:49 crc kubenswrapper[4624]: I0228 04:21:49.540164 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:21:49 crc kubenswrapper[4624]: I0228 04:21:49.541550 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.155689 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537542-psh46"] Feb 28 04:22:00 crc kubenswrapper[4624]: E0228 04:22:00.156867 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10" containerName="oc" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.156881 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10" containerName="oc" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.157096 4624 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10" containerName="oc" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.157682 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.160797 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.161285 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.161795 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.175035 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537542-psh46"] Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.252255 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwkq\" (UniqueName: \"kubernetes.io/projected/8a28442d-b6e0-4152-95e0-0032edfde9dc-kube-api-access-snwkq\") pod \"auto-csr-approver-29537542-psh46\" (UID: \"8a28442d-b6e0-4152-95e0-0032edfde9dc\") " pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.361548 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwkq\" (UniqueName: \"kubernetes.io/projected/8a28442d-b6e0-4152-95e0-0032edfde9dc-kube-api-access-snwkq\") pod \"auto-csr-approver-29537542-psh46\" (UID: \"8a28442d-b6e0-4152-95e0-0032edfde9dc\") " pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.395479 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-snwkq\" (UniqueName: \"kubernetes.io/projected/8a28442d-b6e0-4152-95e0-0032edfde9dc-kube-api-access-snwkq\") pod \"auto-csr-approver-29537542-psh46\" (UID: \"8a28442d-b6e0-4152-95e0-0032edfde9dc\") " pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.477056 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:00 crc kubenswrapper[4624]: I0228 04:22:00.989804 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537542-psh46"] Feb 28 04:22:01 crc kubenswrapper[4624]: I0228 04:22:01.007452 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:22:01 crc kubenswrapper[4624]: I0228 04:22:01.830712 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537542-psh46" event={"ID":"8a28442d-b6e0-4152-95e0-0032edfde9dc","Type":"ContainerStarted","Data":"a73780d61005fce8879ac1276ac8a20b67bb95eb9456dbb9e1788ccdef86f317"} Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.519872 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-99s26"] Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.523906 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.530237 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99s26"] Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.628420 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-catalog-content\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.628548 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdss\" (UniqueName: \"kubernetes.io/projected/266c192f-620c-45b5-a22d-55b150d2b5d2-kube-api-access-lzdss\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.628822 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-utilities\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.731538 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdss\" (UniqueName: \"kubernetes.io/projected/266c192f-620c-45b5-a22d-55b150d2b5d2-kube-api-access-lzdss\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.731677 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-utilities\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.731738 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-catalog-content\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.732789 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-catalog-content\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.733197 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-utilities\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.756853 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdss\" (UniqueName: \"kubernetes.io/projected/266c192f-620c-45b5-a22d-55b150d2b5d2-kube-api-access-lzdss\") pod \"certified-operators-99s26\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.843016 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.847658 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537542-psh46" event={"ID":"8a28442d-b6e0-4152-95e0-0032edfde9dc","Type":"ContainerStarted","Data":"7c66289e3f8b86814e1b056c661fb406102436c3404b429710aba21243fd059d"} Feb 28 04:22:02 crc kubenswrapper[4624]: I0228 04:22:02.880623 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537542-psh46" podStartSLOduration=1.763735676 podStartE2EDuration="2.880582196s" podCreationTimestamp="2026-02-28 04:22:00 +0000 UTC" firstStartedPulling="2026-02-28 04:22:01.006787341 +0000 UTC m=+2775.670826650" lastFinishedPulling="2026-02-28 04:22:02.123633861 +0000 UTC m=+2776.787673170" observedRunningTime="2026-02-28 04:22:02.87479935 +0000 UTC m=+2777.538838659" watchObservedRunningTime="2026-02-28 04:22:02.880582196 +0000 UTC m=+2777.544621505" Feb 28 04:22:03 crc kubenswrapper[4624]: I0228 04:22:03.459701 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99s26"] Feb 28 04:22:03 crc kubenswrapper[4624]: I0228 04:22:03.859172 4624 generic.go:334] "Generic (PLEG): container finished" podID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerID="a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0" exitCode=0 Feb 28 04:22:03 crc kubenswrapper[4624]: I0228 04:22:03.859284 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99s26" event={"ID":"266c192f-620c-45b5-a22d-55b150d2b5d2","Type":"ContainerDied","Data":"a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0"} Feb 28 04:22:03 crc kubenswrapper[4624]: I0228 04:22:03.859588 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99s26" 
event={"ID":"266c192f-620c-45b5-a22d-55b150d2b5d2","Type":"ContainerStarted","Data":"9f185257c7aa4b8b63f0b0411e00ba89b4c4bb1444a375cb1ad820ce4ab4b7b3"} Feb 28 04:22:03 crc kubenswrapper[4624]: I0228 04:22:03.865296 4624 generic.go:334] "Generic (PLEG): container finished" podID="8a28442d-b6e0-4152-95e0-0032edfde9dc" containerID="7c66289e3f8b86814e1b056c661fb406102436c3404b429710aba21243fd059d" exitCode=0 Feb 28 04:22:03 crc kubenswrapper[4624]: I0228 04:22:03.865373 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537542-psh46" event={"ID":"8a28442d-b6e0-4152-95e0-0032edfde9dc","Type":"ContainerDied","Data":"7c66289e3f8b86814e1b056c661fb406102436c3404b429710aba21243fd059d"} Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.322132 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.416892 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snwkq\" (UniqueName: \"kubernetes.io/projected/8a28442d-b6e0-4152-95e0-0032edfde9dc-kube-api-access-snwkq\") pod \"8a28442d-b6e0-4152-95e0-0032edfde9dc\" (UID: \"8a28442d-b6e0-4152-95e0-0032edfde9dc\") " Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.427469 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a28442d-b6e0-4152-95e0-0032edfde9dc-kube-api-access-snwkq" (OuterVolumeSpecName: "kube-api-access-snwkq") pod "8a28442d-b6e0-4152-95e0-0032edfde9dc" (UID: "8a28442d-b6e0-4152-95e0-0032edfde9dc"). InnerVolumeSpecName "kube-api-access-snwkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.524019 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snwkq\" (UniqueName: \"kubernetes.io/projected/8a28442d-b6e0-4152-95e0-0032edfde9dc-kube-api-access-snwkq\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.898630 4624 generic.go:334] "Generic (PLEG): container finished" podID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerID="ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4" exitCode=0 Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.898743 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99s26" event={"ID":"266c192f-620c-45b5-a22d-55b150d2b5d2","Type":"ContainerDied","Data":"ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4"} Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.906888 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537542-psh46" event={"ID":"8a28442d-b6e0-4152-95e0-0032edfde9dc","Type":"ContainerDied","Data":"a73780d61005fce8879ac1276ac8a20b67bb95eb9456dbb9e1788ccdef86f317"} Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.906939 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a73780d61005fce8879ac1276ac8a20b67bb95eb9456dbb9e1788ccdef86f317" Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.907020 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537542-psh46" Feb 28 04:22:05 crc kubenswrapper[4624]: I0228 04:22:05.998218 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537536-qzxnm"] Feb 28 04:22:06 crc kubenswrapper[4624]: I0228 04:22:06.014715 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537536-qzxnm"] Feb 28 04:22:06 crc kubenswrapper[4624]: I0228 04:22:06.106191 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626e3e8f-5304-4a60-8538-0ab270497329" path="/var/lib/kubelet/pods/626e3e8f-5304-4a60-8538-0ab270497329/volumes" Feb 28 04:22:06 crc kubenswrapper[4624]: I0228 04:22:06.930169 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99s26" event={"ID":"266c192f-620c-45b5-a22d-55b150d2b5d2","Type":"ContainerStarted","Data":"585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150"} Feb 28 04:22:06 crc kubenswrapper[4624]: I0228 04:22:06.958595 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-99s26" podStartSLOduration=2.388853972 podStartE2EDuration="4.958571916s" podCreationTimestamp="2026-02-28 04:22:02 +0000 UTC" firstStartedPulling="2026-02-28 04:22:03.861176208 +0000 UTC m=+2778.525215517" lastFinishedPulling="2026-02-28 04:22:06.430894122 +0000 UTC m=+2781.094933461" observedRunningTime="2026-02-28 04:22:06.949563154 +0000 UTC m=+2781.613602463" watchObservedRunningTime="2026-02-28 04:22:06.958571916 +0000 UTC m=+2781.622611215" Feb 28 04:22:12 crc kubenswrapper[4624]: I0228 04:22:12.844430 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:12 crc kubenswrapper[4624]: I0228 04:22:12.845064 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:12 crc kubenswrapper[4624]: I0228 04:22:12.918000 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:13 crc kubenswrapper[4624]: I0228 04:22:13.061579 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:13 crc kubenswrapper[4624]: I0228 04:22:13.159516 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99s26"] Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.047115 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99s26" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="registry-server" containerID="cri-o://585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150" gracePeriod=2 Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.550513 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.644412 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-utilities\") pod \"266c192f-620c-45b5-a22d-55b150d2b5d2\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.644488 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-catalog-content\") pod \"266c192f-620c-45b5-a22d-55b150d2b5d2\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.644670 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdss\" (UniqueName: \"kubernetes.io/projected/266c192f-620c-45b5-a22d-55b150d2b5d2-kube-api-access-lzdss\") pod \"266c192f-620c-45b5-a22d-55b150d2b5d2\" (UID: \"266c192f-620c-45b5-a22d-55b150d2b5d2\") " Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.646303 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-utilities" (OuterVolumeSpecName: "utilities") pod "266c192f-620c-45b5-a22d-55b150d2b5d2" (UID: "266c192f-620c-45b5-a22d-55b150d2b5d2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.648841 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.652474 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266c192f-620c-45b5-a22d-55b150d2b5d2-kube-api-access-lzdss" (OuterVolumeSpecName: "kube-api-access-lzdss") pod "266c192f-620c-45b5-a22d-55b150d2b5d2" (UID: "266c192f-620c-45b5-a22d-55b150d2b5d2"). InnerVolumeSpecName "kube-api-access-lzdss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.750490 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzdss\" (UniqueName: \"kubernetes.io/projected/266c192f-620c-45b5-a22d-55b150d2b5d2-kube-api-access-lzdss\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.903272 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "266c192f-620c-45b5-a22d-55b150d2b5d2" (UID: "266c192f-620c-45b5-a22d-55b150d2b5d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:22:15 crc kubenswrapper[4624]: I0228 04:22:15.955599 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266c192f-620c-45b5-a22d-55b150d2b5d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.060817 4624 generic.go:334] "Generic (PLEG): container finished" podID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerID="585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150" exitCode=0 Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.060873 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99s26" event={"ID":"266c192f-620c-45b5-a22d-55b150d2b5d2","Type":"ContainerDied","Data":"585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150"} Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.060900 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-99s26" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.060920 4624 scope.go:117] "RemoveContainer" containerID="585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.060905 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99s26" event={"ID":"266c192f-620c-45b5-a22d-55b150d2b5d2","Type":"ContainerDied","Data":"9f185257c7aa4b8b63f0b0411e00ba89b4c4bb1444a375cb1ad820ce4ab4b7b3"} Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.106722 4624 scope.go:117] "RemoveContainer" containerID="ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.124278 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99s26"] Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.126227 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-99s26"] Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.133459 4624 scope.go:117] "RemoveContainer" containerID="a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.175605 4624 scope.go:117] "RemoveContainer" containerID="585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150" Feb 28 04:22:16 crc kubenswrapper[4624]: E0228 04:22:16.176112 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150\": container with ID starting with 585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150 not found: ID does not exist" containerID="585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.176158 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150"} err="failed to get container status \"585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150\": rpc error: code = NotFound desc = could not find container \"585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150\": container with ID starting with 585fa96cc165ec490efef1f4a60cecc63ba12e462a74b4c6f68e881d5f1f9150 not found: ID does not exist" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.176188 4624 scope.go:117] "RemoveContainer" containerID="ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4" Feb 28 04:22:16 crc kubenswrapper[4624]: E0228 04:22:16.176679 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4\": container with ID starting with ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4 not found: ID does not exist" containerID="ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.176700 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4"} err="failed to get container status \"ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4\": rpc error: code = NotFound desc = could not find container \"ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4\": container with ID starting with ce11fbb8ac4ea457420148fd41aca1c3ad00f198e42f3a4ae3a2a6bf7c84d3f4 not found: ID does not exist" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.176713 4624 scope.go:117] "RemoveContainer" containerID="a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0" Feb 28 04:22:16 crc kubenswrapper[4624]: E0228 
04:22:16.177156 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0\": container with ID starting with a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0 not found: ID does not exist" containerID="a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0" Feb 28 04:22:16 crc kubenswrapper[4624]: I0228 04:22:16.177175 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0"} err="failed to get container status \"a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0\": rpc error: code = NotFound desc = could not find container \"a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0\": container with ID starting with a0a57167fb39758fe6aec25bab9cdac057bb3c42a8abcf9fe27f81d587a049e0 not found: ID does not exist" Feb 28 04:22:18 crc kubenswrapper[4624]: I0228 04:22:18.100626 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" path="/var/lib/kubelet/pods/266c192f-620c-45b5-a22d-55b150d2b5d2/volumes" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.539937 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.539996 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.540040 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.540562 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcf1ca1dfe79f2c158561d47710b70b5dfaf73e2fc1d5b04c1b6a36fd5c53020"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.540615 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://fcf1ca1dfe79f2c158561d47710b70b5dfaf73e2fc1d5b04c1b6a36fd5c53020" gracePeriod=600 Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.579565 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vwj52"] Feb 28 04:22:19 crc kubenswrapper[4624]: E0228 04:22:19.579954 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="extract-content" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.579969 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="extract-content" Feb 28 04:22:19 crc kubenswrapper[4624]: E0228 04:22:19.579991 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="extract-utilities" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.579997 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" 
containerName="extract-utilities" Feb 28 04:22:19 crc kubenswrapper[4624]: E0228 04:22:19.580009 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="registry-server" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.580016 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="registry-server" Feb 28 04:22:19 crc kubenswrapper[4624]: E0228 04:22:19.580027 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a28442d-b6e0-4152-95e0-0032edfde9dc" containerName="oc" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.580032 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a28442d-b6e0-4152-95e0-0032edfde9dc" containerName="oc" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.580248 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="266c192f-620c-45b5-a22d-55b150d2b5d2" containerName="registry-server" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.580265 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a28442d-b6e0-4152-95e0-0032edfde9dc" containerName="oc" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.581494 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.602050 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwj52"] Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.733865 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-catalog-content\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.734386 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnsgt\" (UniqueName: \"kubernetes.io/projected/9fe25248-cd69-4cd7-ba31-1a4177d6a643-kube-api-access-fnsgt\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.734416 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-utilities\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.836236 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-catalog-content\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.836512 4624 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fnsgt\" (UniqueName: \"kubernetes.io/projected/9fe25248-cd69-4cd7-ba31-1a4177d6a643-kube-api-access-fnsgt\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.836624 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-utilities\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.836914 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-catalog-content\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.837207 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-utilities\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.863689 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnsgt\" (UniqueName: \"kubernetes.io/projected/9fe25248-cd69-4cd7-ba31-1a4177d6a643-kube-api-access-fnsgt\") pod \"redhat-operators-vwj52\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:19 crc kubenswrapper[4624]: I0228 04:22:19.907314 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:20 crc kubenswrapper[4624]: I0228 04:22:20.173691 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="fcf1ca1dfe79f2c158561d47710b70b5dfaf73e2fc1d5b04c1b6a36fd5c53020" exitCode=0 Feb 28 04:22:20 crc kubenswrapper[4624]: I0228 04:22:20.173771 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"fcf1ca1dfe79f2c158561d47710b70b5dfaf73e2fc1d5b04c1b6a36fd5c53020"} Feb 28 04:22:20 crc kubenswrapper[4624]: I0228 04:22:20.174288 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"} Feb 28 04:22:20 crc kubenswrapper[4624]: I0228 04:22:20.174315 4624 scope.go:117] "RemoveContainer" containerID="21103d4f5ca0ba51ee67898de01fd86695691bc13629bf4a4f4b886503b18127" Feb 28 04:22:20 crc kubenswrapper[4624]: W0228 04:22:20.523375 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe25248_cd69_4cd7_ba31_1a4177d6a643.slice/crio-7496cab44e01d888656c1250d53bd1174e957811776577054c7f715601ece1a2 WatchSource:0}: Error finding container 7496cab44e01d888656c1250d53bd1174e957811776577054c7f715601ece1a2: Status 404 returned error can't find the container with id 7496cab44e01d888656c1250d53bd1174e957811776577054c7f715601ece1a2 Feb 28 04:22:20 crc kubenswrapper[4624]: I0228 04:22:20.523754 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vwj52"] Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.173110 4624 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-nq9mx"] Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.175485 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.185768 4624 generic.go:334] "Generic (PLEG): container finished" podID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerID="a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46" exitCode=0 Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.185822 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerDied","Data":"a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46"} Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.185870 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerStarted","Data":"7496cab44e01d888656c1250d53bd1174e957811776577054c7f715601ece1a2"} Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.189639 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nq9mx"] Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.268188 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-utilities\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.268380 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dp8l\" (UniqueName: 
\"kubernetes.io/projected/e366f15f-00d0-4c96-8907-89fdf5b0389a-kube-api-access-8dp8l\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.268492 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-catalog-content\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.370675 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dp8l\" (UniqueName: \"kubernetes.io/projected/e366f15f-00d0-4c96-8907-89fdf5b0389a-kube-api-access-8dp8l\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.370775 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-catalog-content\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.370857 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-utilities\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.371573 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-catalog-content\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.374574 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-utilities\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.423832 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dp8l\" (UniqueName: \"kubernetes.io/projected/e366f15f-00d0-4c96-8907-89fdf5b0389a-kube-api-access-8dp8l\") pod \"community-operators-nq9mx\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.498806 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:21 crc kubenswrapper[4624]: I0228 04:22:21.952920 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nq9mx"] Feb 28 04:22:22 crc kubenswrapper[4624]: I0228 04:22:22.206641 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerStarted","Data":"9272c87b80988d0824e0ca66fbcbdf5b66c32d2f9be10e79e4c8598c5a08845a"} Feb 28 04:22:23 crc kubenswrapper[4624]: I0228 04:22:23.219637 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerStarted","Data":"00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8"} Feb 28 04:22:23 crc kubenswrapper[4624]: I0228 04:22:23.224764 4624 generic.go:334] "Generic (PLEG): container finished" podID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerID="f19f9f8843c8961afa02292019db4a6e103696131a38d50bc3867664f46c2081" exitCode=0 Feb 28 04:22:23 crc kubenswrapper[4624]: I0228 04:22:23.224850 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerDied","Data":"f19f9f8843c8961afa02292019db4a6e103696131a38d50bc3867664f46c2081"} Feb 28 04:22:25 crc kubenswrapper[4624]: I0228 04:22:25.245069 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerStarted","Data":"62dc5f77aa910e5cfaa919b0c35c1cd1fc1cf248077cdd58f8ce8338e114e311"} Feb 28 04:22:27 crc kubenswrapper[4624]: I0228 04:22:27.267296 4624 generic.go:334] "Generic (PLEG): container finished" podID="e366f15f-00d0-4c96-8907-89fdf5b0389a" 
containerID="62dc5f77aa910e5cfaa919b0c35c1cd1fc1cf248077cdd58f8ce8338e114e311" exitCode=0 Feb 28 04:22:27 crc kubenswrapper[4624]: I0228 04:22:27.267380 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerDied","Data":"62dc5f77aa910e5cfaa919b0c35c1cd1fc1cf248077cdd58f8ce8338e114e311"} Feb 28 04:22:28 crc kubenswrapper[4624]: I0228 04:22:28.285742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerStarted","Data":"05230e362b7f9c4e9d1d0b9b7790c8e73008efe864077163746b3f79e2180478"} Feb 28 04:22:28 crc kubenswrapper[4624]: I0228 04:22:28.291524 4624 generic.go:334] "Generic (PLEG): container finished" podID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerID="00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8" exitCode=0 Feb 28 04:22:28 crc kubenswrapper[4624]: I0228 04:22:28.291586 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerDied","Data":"00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8"} Feb 28 04:22:28 crc kubenswrapper[4624]: I0228 04:22:28.317933 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nq9mx" podStartSLOduration=2.796670464 podStartE2EDuration="7.317907261s" podCreationTimestamp="2026-02-28 04:22:21 +0000 UTC" firstStartedPulling="2026-02-28 04:22:23.22794834 +0000 UTC m=+2797.891987649" lastFinishedPulling="2026-02-28 04:22:27.749185137 +0000 UTC m=+2802.413224446" observedRunningTime="2026-02-28 04:22:28.311742124 +0000 UTC m=+2802.975781453" watchObservedRunningTime="2026-02-28 04:22:28.317907261 +0000 UTC m=+2802.981946570" Feb 28 04:22:29 crc kubenswrapper[4624]: I0228 
04:22:29.305579 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerStarted","Data":"dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428"} Feb 28 04:22:29 crc kubenswrapper[4624]: I0228 04:22:29.907858 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:29 crc kubenswrapper[4624]: I0228 04:22:29.908244 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:22:30 crc kubenswrapper[4624]: I0228 04:22:30.994302 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vwj52" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" probeResult="failure" output=< Feb 28 04:22:30 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:22:30 crc kubenswrapper[4624]: > Feb 28 04:22:31 crc kubenswrapper[4624]: I0228 04:22:31.499615 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:31 crc kubenswrapper[4624]: I0228 04:22:31.499711 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:31 crc kubenswrapper[4624]: I0228 04:22:31.558681 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:31 crc kubenswrapper[4624]: I0228 04:22:31.592412 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vwj52" podStartSLOduration=5.053687691 podStartE2EDuration="12.592381969s" podCreationTimestamp="2026-02-28 04:22:19 +0000 UTC" firstStartedPulling="2026-02-28 04:22:21.187761792 +0000 
UTC m=+2795.851801101" lastFinishedPulling="2026-02-28 04:22:28.72645607 +0000 UTC m=+2803.390495379" observedRunningTime="2026-02-28 04:22:29.339765177 +0000 UTC m=+2804.003804486" watchObservedRunningTime="2026-02-28 04:22:31.592381969 +0000 UTC m=+2806.256421288" Feb 28 04:22:32 crc kubenswrapper[4624]: I0228 04:22:32.400428 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.162660 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nq9mx"] Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.163309 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nq9mx" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="registry-server" containerID="cri-o://05230e362b7f9c4e9d1d0b9b7790c8e73008efe864077163746b3f79e2180478" gracePeriod=2 Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.407571 4624 generic.go:334] "Generic (PLEG): container finished" podID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerID="05230e362b7f9c4e9d1d0b9b7790c8e73008efe864077163746b3f79e2180478" exitCode=0 Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.407670 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerDied","Data":"05230e362b7f9c4e9d1d0b9b7790c8e73008efe864077163746b3f79e2180478"} Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.792161 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.926684 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-utilities\") pod \"e366f15f-00d0-4c96-8907-89fdf5b0389a\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.927212 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-catalog-content\") pod \"e366f15f-00d0-4c96-8907-89fdf5b0389a\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.927234 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dp8l\" (UniqueName: \"kubernetes.io/projected/e366f15f-00d0-4c96-8907-89fdf5b0389a-kube-api-access-8dp8l\") pod \"e366f15f-00d0-4c96-8907-89fdf5b0389a\" (UID: \"e366f15f-00d0-4c96-8907-89fdf5b0389a\") " Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.927653 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-utilities" (OuterVolumeSpecName: "utilities") pod "e366f15f-00d0-4c96-8907-89fdf5b0389a" (UID: "e366f15f-00d0-4c96-8907-89fdf5b0389a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.948455 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e366f15f-00d0-4c96-8907-89fdf5b0389a-kube-api-access-8dp8l" (OuterVolumeSpecName: "kube-api-access-8dp8l") pod "e366f15f-00d0-4c96-8907-89fdf5b0389a" (UID: "e366f15f-00d0-4c96-8907-89fdf5b0389a"). InnerVolumeSpecName "kube-api-access-8dp8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:22:35 crc kubenswrapper[4624]: I0228 04:22:35.984989 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e366f15f-00d0-4c96-8907-89fdf5b0389a" (UID: "e366f15f-00d0-4c96-8907-89fdf5b0389a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.029171 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.029199 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e366f15f-00d0-4c96-8907-89fdf5b0389a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.029212 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dp8l\" (UniqueName: \"kubernetes.io/projected/e366f15f-00d0-4c96-8907-89fdf5b0389a-kube-api-access-8dp8l\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.429505 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nq9mx" event={"ID":"e366f15f-00d0-4c96-8907-89fdf5b0389a","Type":"ContainerDied","Data":"9272c87b80988d0824e0ca66fbcbdf5b66c32d2f9be10e79e4c8598c5a08845a"} Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.429601 4624 scope.go:117] "RemoveContainer" containerID="05230e362b7f9c4e9d1d0b9b7790c8e73008efe864077163746b3f79e2180478" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.430919 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nq9mx" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.465895 4624 scope.go:117] "RemoveContainer" containerID="62dc5f77aa910e5cfaa919b0c35c1cd1fc1cf248077cdd58f8ce8338e114e311" Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.466683 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nq9mx"] Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.477122 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nq9mx"] Feb 28 04:22:36 crc kubenswrapper[4624]: I0228 04:22:36.505857 4624 scope.go:117] "RemoveContainer" containerID="f19f9f8843c8961afa02292019db4a6e103696131a38d50bc3867664f46c2081" Feb 28 04:22:38 crc kubenswrapper[4624]: I0228 04:22:38.102217 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" path="/var/lib/kubelet/pods/e366f15f-00d0-4c96-8907-89fdf5b0389a/volumes" Feb 28 04:22:40 crc kubenswrapper[4624]: I0228 04:22:40.962374 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vwj52" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" probeResult="failure" output=< Feb 28 04:22:40 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:22:40 crc kubenswrapper[4624]: > Feb 28 04:22:50 crc kubenswrapper[4624]: I0228 04:22:50.952228 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vwj52" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" probeResult="failure" output=< Feb 28 04:22:50 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:22:50 crc kubenswrapper[4624]: > Feb 28 04:22:52 crc kubenswrapper[4624]: I0228 04:22:52.595741 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="2588d2da-daa4-4eb7-b706-25290e0840c7" containerID="e89b643920ae5ed8077602711a6487f3cd27ac0e14460286c91fcc56c1704eb1" exitCode=0 Feb 28 04:22:52 crc kubenswrapper[4624]: I0228 04:22:52.595812 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" event={"ID":"2588d2da-daa4-4eb7-b706-25290e0840c7","Type":"ContainerDied","Data":"e89b643920ae5ed8077602711a6487f3cd27ac0e14460286c91fcc56c1704eb1"} Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.121377 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.202724 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-1\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.202787 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-0\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.202807 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-telemetry-combined-ca-bundle\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.202830 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-q8rwd\" (UniqueName: \"kubernetes.io/projected/2588d2da-daa4-4eb7-b706-25290e0840c7-kube-api-access-q8rwd\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.202849 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-inventory\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.203048 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-2\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.203136 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ssh-key-openstack-edpm-ipam\") pod \"2588d2da-daa4-4eb7-b706-25290e0840c7\" (UID: \"2588d2da-daa4-4eb7-b706-25290e0840c7\") " Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.214129 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.244158 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2588d2da-daa4-4eb7-b706-25290e0840c7-kube-api-access-q8rwd" (OuterVolumeSpecName: "kube-api-access-q8rwd") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "kube-api-access-q8rwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.244850 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.251340 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.264124 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-inventory" (OuterVolumeSpecName: "inventory") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.271607 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.282210 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2588d2da-daa4-4eb7-b706-25290e0840c7" (UID: "2588d2da-daa4-4eb7-b706-25290e0840c7"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314206 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314643 4624 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314735 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8rwd\" (UniqueName: \"kubernetes.io/projected/2588d2da-daa4-4eb7-b706-25290e0840c7-kube-api-access-q8rwd\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314790 4624 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314843 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314902 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.314990 4624 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2588d2da-daa4-4eb7-b706-25290e0840c7-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.620766 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" event={"ID":"2588d2da-daa4-4eb7-b706-25290e0840c7","Type":"ContainerDied","Data":"05749a5727e8e40bc9ef02b1e54818621582ddbf8805a97be7d8e5ae519c8bcf"} Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.620834 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05749a5727e8e40bc9ef02b1e54818621582ddbf8805a97be7d8e5ae519c8bcf" Feb 28 04:22:54 crc kubenswrapper[4624]: I0228 04:22:54.620913 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq" Feb 28 04:22:59 crc kubenswrapper[4624]: I0228 04:22:59.977977 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:23:00 crc kubenswrapper[4624]: I0228 04:23:00.063982 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:23:00 crc kubenswrapper[4624]: I0228 04:23:00.225329 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vwj52"] Feb 28 04:23:01 crc kubenswrapper[4624]: I0228 04:23:01.707559 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vwj52" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" containerID="cri-o://dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428" gracePeriod=2 Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.255726 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.323132 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-utilities\") pod \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.323574 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-catalog-content\") pod \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.323747 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnsgt\" (UniqueName: \"kubernetes.io/projected/9fe25248-cd69-4cd7-ba31-1a4177d6a643-kube-api-access-fnsgt\") pod \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\" (UID: \"9fe25248-cd69-4cd7-ba31-1a4177d6a643\") " Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.329748 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-utilities" (OuterVolumeSpecName: "utilities") pod "9fe25248-cd69-4cd7-ba31-1a4177d6a643" (UID: "9fe25248-cd69-4cd7-ba31-1a4177d6a643"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.345718 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe25248-cd69-4cd7-ba31-1a4177d6a643-kube-api-access-fnsgt" (OuterVolumeSpecName: "kube-api-access-fnsgt") pod "9fe25248-cd69-4cd7-ba31-1a4177d6a643" (UID: "9fe25248-cd69-4cd7-ba31-1a4177d6a643"). InnerVolumeSpecName "kube-api-access-fnsgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.428241 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnsgt\" (UniqueName: \"kubernetes.io/projected/9fe25248-cd69-4cd7-ba31-1a4177d6a643-kube-api-access-fnsgt\") on node \"crc\" DevicePath \"\"" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.428278 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.475998 4624 scope.go:117] "RemoveContainer" containerID="da9eec1f701d66e25eff326a724e9ffec374244faaac7ce6cf42d17bbbc2c94f" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.502291 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fe25248-cd69-4cd7-ba31-1a4177d6a643" (UID: "9fe25248-cd69-4cd7-ba31-1a4177d6a643"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.531147 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fe25248-cd69-4cd7-ba31-1a4177d6a643-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.723667 4624 generic.go:334] "Generic (PLEG): container finished" podID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerID="dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428" exitCode=0 Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.723767 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vwj52" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.723814 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerDied","Data":"dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428"} Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.723876 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vwj52" event={"ID":"9fe25248-cd69-4cd7-ba31-1a4177d6a643","Type":"ContainerDied","Data":"7496cab44e01d888656c1250d53bd1174e957811776577054c7f715601ece1a2"} Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.723941 4624 scope.go:117] "RemoveContainer" containerID="dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.769732 4624 scope.go:117] "RemoveContainer" containerID="00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.778625 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vwj52"] Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.791706 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vwj52"] Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.815320 4624 scope.go:117] "RemoveContainer" containerID="a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.856380 4624 scope.go:117] "RemoveContainer" containerID="dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428" Feb 28 04:23:02 crc kubenswrapper[4624]: E0228 04:23:02.857040 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428\": container with ID starting with dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428 not found: ID does not exist" containerID="dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.857131 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428"} err="failed to get container status \"dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428\": rpc error: code = NotFound desc = could not find container \"dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428\": container with ID starting with dd2a0c89047bc17a27d38a079c9448f7def6f42540a2a6a1a0c2de634df20428 not found: ID does not exist" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.857185 4624 scope.go:117] "RemoveContainer" containerID="00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8" Feb 28 04:23:02 crc kubenswrapper[4624]: E0228 04:23:02.857649 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8\": container with ID starting with 00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8 not found: ID does not exist" containerID="00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.857682 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8"} err="failed to get container status \"00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8\": rpc error: code = NotFound desc = could not find container \"00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8\": container with ID 
starting with 00bdc2327c2c3a3ddcf064ae1cbc043023f5e769e64d9aa23babc6aa077017b8 not found: ID does not exist" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.857699 4624 scope.go:117] "RemoveContainer" containerID="a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46" Feb 28 04:23:02 crc kubenswrapper[4624]: E0228 04:23:02.858101 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46\": container with ID starting with a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46 not found: ID does not exist" containerID="a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46" Feb 28 04:23:02 crc kubenswrapper[4624]: I0228 04:23:02.858122 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46"} err="failed to get container status \"a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46\": rpc error: code = NotFound desc = could not find container \"a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46\": container with ID starting with a7ff8ee6b52504ab86a88abeea984551af64ecd2c0f934d071199717a5702b46 not found: ID does not exist" Feb 28 04:23:04 crc kubenswrapper[4624]: I0228 04:23:04.100237 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" path="/var/lib/kubelet/pods/9fe25248-cd69-4cd7-ba31-1a4177d6a643/volumes" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.781512 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.782979 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="extract-utilities" Feb 28 04:23:45 crc kubenswrapper[4624]: 
I0228 04:23:45.782996 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="extract-utilities" Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.783031 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="extract-content" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783039 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="extract-content" Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.783054 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="registry-server" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783065 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="registry-server" Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.783102 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783109 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.783131 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2588d2da-daa4-4eb7-b706-25290e0840c7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783140 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="2588d2da-daa4-4eb7-b706-25290e0840c7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.783154 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" 
containerName="extract-content" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783163 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="extract-content" Feb 28 04:23:45 crc kubenswrapper[4624]: E0228 04:23:45.783200 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="extract-utilities" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783206 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="extract-utilities" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783466 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="2588d2da-daa4-4eb7-b706-25290e0840c7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783507 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe25248-cd69-4cd7-ba31-1a4177d6a643" containerName="registry-server" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.783522 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e366f15f-00d0-4c96-8907-89fdf5b0389a" containerName="registry-server" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.803715 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.815918 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.816132 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.816282 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.816441 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k5ddr" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.825001 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.874565 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.874633 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.874741 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-config-data\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.977541 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.977963 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.978158 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.978346 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.978486 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6c2th\" (UniqueName: \"kubernetes.io/projected/8687164b-ff55-49e1-ae97-79d38c05f861-kube-api-access-6c2th\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.978613 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.978730 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.979767 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.979954 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-config-data\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.981915 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.982918 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:45 crc kubenswrapper[4624]: I0228 04:23:45.993871 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.084459 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.085053 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.085471 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.089586 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.089710 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.089786 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c2th\" (UniqueName: \"kubernetes.io/projected/8687164b-ff55-49e1-ae97-79d38c05f861-kube-api-access-6c2th\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.089990 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.090172 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 
04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.090766 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.096446 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.107929 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.109311 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.115667 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c2th\" (UniqueName: \"kubernetes.io/projected/8687164b-ff55-49e1-ae97-79d38c05f861-kube-api-access-6c2th\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.150911 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " pod="openstack/tempest-tests-tempest" Feb 28 
04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.450870 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k5ddr" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.468321 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 04:23:46 crc kubenswrapper[4624]: I0228 04:23:46.950896 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 04:23:47 crc kubenswrapper[4624]: I0228 04:23:47.555386 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8687164b-ff55-49e1-ae97-79d38c05f861","Type":"ContainerStarted","Data":"b90ced3c5ee418b31a50043fcac48bff6458d84d1ecb1834bc057f818b6c2420"} Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.160443 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537544-sqhfj"] Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.163968 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537544-sqhfj" Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.170599 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.174526 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.174816 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.182772 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537544-sqhfj"] Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.319331 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfvl\" (UniqueName: \"kubernetes.io/projected/e328e122-3e66-465c-a05d-8fddc9b01fce-kube-api-access-wsfvl\") pod \"auto-csr-approver-29537544-sqhfj\" (UID: \"e328e122-3e66-465c-a05d-8fddc9b01fce\") " pod="openshift-infra/auto-csr-approver-29537544-sqhfj" Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.421347 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfvl\" (UniqueName: \"kubernetes.io/projected/e328e122-3e66-465c-a05d-8fddc9b01fce-kube-api-access-wsfvl\") pod \"auto-csr-approver-29537544-sqhfj\" (UID: \"e328e122-3e66-465c-a05d-8fddc9b01fce\") " pod="openshift-infra/auto-csr-approver-29537544-sqhfj" Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.444817 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfvl\" (UniqueName: \"kubernetes.io/projected/e328e122-3e66-465c-a05d-8fddc9b01fce-kube-api-access-wsfvl\") pod \"auto-csr-approver-29537544-sqhfj\" (UID: \"e328e122-3e66-465c-a05d-8fddc9b01fce\") " 
pod="openshift-infra/auto-csr-approver-29537544-sqhfj"
Feb 28 04:24:00 crc kubenswrapper[4624]: I0228 04:24:00.508061 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537544-sqhfj"
Feb 28 04:24:01 crc kubenswrapper[4624]: I0228 04:24:01.308407 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537544-sqhfj"]
Feb 28 04:24:01 crc kubenswrapper[4624]: I0228 04:24:01.794608 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537544-sqhfj" event={"ID":"e328e122-3e66-465c-a05d-8fddc9b01fce","Type":"ContainerStarted","Data":"a04baa112b8bc44a2981ef37207203e9518865886f2c4a2fd1dc131057a1f18f"}
Feb 28 04:24:02 crc kubenswrapper[4624]: I0228 04:24:02.828631 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537544-sqhfj" event={"ID":"e328e122-3e66-465c-a05d-8fddc9b01fce","Type":"ContainerStarted","Data":"b9e1e23576f53adc11960b9a462d9264ea66e5857743a47e5b828e05d8c8613f"}
Feb 28 04:24:02 crc kubenswrapper[4624]: I0228 04:24:02.858294 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537544-sqhfj" podStartSLOduration=1.860239262 podStartE2EDuration="2.858267802s" podCreationTimestamp="2026-02-28 04:24:00 +0000 UTC" firstStartedPulling="2026-02-28 04:24:01.317895645 +0000 UTC m=+2895.981934954" lastFinishedPulling="2026-02-28 04:24:02.315924185 +0000 UTC m=+2896.979963494" observedRunningTime="2026-02-28 04:24:02.847151373 +0000 UTC m=+2897.511190692" watchObservedRunningTime="2026-02-28 04:24:02.858267802 +0000 UTC m=+2897.522307111"
Feb 28 04:24:03 crc kubenswrapper[4624]: I0228 04:24:03.859706 4624 generic.go:334] "Generic (PLEG): container finished" podID="e328e122-3e66-465c-a05d-8fddc9b01fce" containerID="b9e1e23576f53adc11960b9a462d9264ea66e5857743a47e5b828e05d8c8613f" exitCode=0
Feb 28 04:24:03 crc kubenswrapper[4624]: I0228 04:24:03.861178 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537544-sqhfj" event={"ID":"e328e122-3e66-465c-a05d-8fddc9b01fce","Type":"ContainerDied","Data":"b9e1e23576f53adc11960b9a462d9264ea66e5857743a47e5b828e05d8c8613f"}
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.341031 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xg8gw"]
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.344563 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.386877 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg8gw"]
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.438164 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7pdl\" (UniqueName: \"kubernetes.io/projected/34791b78-1508-4fbf-abba-4b9b8ee498c7-kube-api-access-l7pdl\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.438279 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-catalog-content\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.438376 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-utilities\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.541598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-catalog-content\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.541718 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-utilities\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.541816 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7pdl\" (UniqueName: \"kubernetes.io/projected/34791b78-1508-4fbf-abba-4b9b8ee498c7-kube-api-access-l7pdl\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.542295 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-catalog-content\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.542798 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-utilities\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.592281 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7pdl\" (UniqueName: \"kubernetes.io/projected/34791b78-1508-4fbf-abba-4b9b8ee498c7-kube-api-access-l7pdl\") pod \"redhat-marketplace-xg8gw\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") " pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:08 crc kubenswrapper[4624]: I0228 04:24:08.691551 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:19 crc kubenswrapper[4624]: I0228 04:24:19.539870 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 04:24:19 crc kubenswrapper[4624]: I0228 04:24:19.540301 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 04:24:36 crc kubenswrapper[4624]: E0228 04:24:36.244405 4624 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Feb 28 04:24:36 crc kubenswrapper[4624]: E0228 04:24:36.248515 4624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c2th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(8687164b-ff55-49e1-ae97-79d38c05f861): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 28 04:24:36 crc kubenswrapper[4624]: E0228 04:24:36.249941 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="8687164b-ff55-49e1-ae97-79d38c05f861"
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.286418 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537544-sqhfj" event={"ID":"e328e122-3e66-465c-a05d-8fddc9b01fce","Type":"ContainerDied","Data":"a04baa112b8bc44a2981ef37207203e9518865886f2c4a2fd1dc131057a1f18f"}
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.286501 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a04baa112b8bc44a2981ef37207203e9518865886f2c4a2fd1dc131057a1f18f"
Feb 28 04:24:36 crc kubenswrapper[4624]: E0228 04:24:36.289554 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="8687164b-ff55-49e1-ae97-79d38c05f861"
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.290074 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537544-sqhfj"
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.327668 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfvl\" (UniqueName: \"kubernetes.io/projected/e328e122-3e66-465c-a05d-8fddc9b01fce-kube-api-access-wsfvl\") pod \"e328e122-3e66-465c-a05d-8fddc9b01fce\" (UID: \"e328e122-3e66-465c-a05d-8fddc9b01fce\") "
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.366237 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e328e122-3e66-465c-a05d-8fddc9b01fce-kube-api-access-wsfvl" (OuterVolumeSpecName: "kube-api-access-wsfvl") pod "e328e122-3e66-465c-a05d-8fddc9b01fce" (UID: "e328e122-3e66-465c-a05d-8fddc9b01fce"). InnerVolumeSpecName "kube-api-access-wsfvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.431488 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfvl\" (UniqueName: \"kubernetes.io/projected/e328e122-3e66-465c-a05d-8fddc9b01fce-kube-api-access-wsfvl\") on node \"crc\" DevicePath \"\""
Feb 28 04:24:36 crc kubenswrapper[4624]: I0228 04:24:36.801881 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg8gw"]
Feb 28 04:24:37 crc kubenswrapper[4624]: I0228 04:24:37.297342 4624 generic.go:334] "Generic (PLEG): container finished" podID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerID="32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3" exitCode=0
Feb 28 04:24:37 crc kubenswrapper[4624]: I0228 04:24:37.297817 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537544-sqhfj"
Feb 28 04:24:37 crc kubenswrapper[4624]: I0228 04:24:37.297553 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerDied","Data":"32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3"}
Feb 28 04:24:37 crc kubenswrapper[4624]: I0228 04:24:37.298512 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerStarted","Data":"8eab383694a71eae2a7829340835b47ac47dee44f3b2893713e97cdddafb4098"}
Feb 28 04:24:37 crc kubenswrapper[4624]: I0228 04:24:37.466017 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537538-9mscz"]
Feb 28 04:24:37 crc kubenswrapper[4624]: I0228 04:24:37.475471 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537538-9mscz"]
Feb 28 04:24:38 crc kubenswrapper[4624]: I0228 04:24:38.113938 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6009d647-0a14-455b-a804-edffda2d3941" path="/var/lib/kubelet/pods/6009d647-0a14-455b-a804-edffda2d3941/volumes"
Feb 28 04:24:39 crc kubenswrapper[4624]: I0228 04:24:39.321223 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerStarted","Data":"9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3"}
Feb 28 04:24:40 crc kubenswrapper[4624]: I0228 04:24:40.340961 4624 generic.go:334] "Generic (PLEG): container finished" podID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerID="9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3" exitCode=0
Feb 28 04:24:40 crc kubenswrapper[4624]: I0228 04:24:40.341331 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerDied","Data":"9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3"}
Feb 28 04:24:41 crc kubenswrapper[4624]: I0228 04:24:41.355911 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerStarted","Data":"0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004"}
Feb 28 04:24:41 crc kubenswrapper[4624]: I0228 04:24:41.399346 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xg8gw" podStartSLOduration=29.949129944 podStartE2EDuration="33.399318145s" podCreationTimestamp="2026-02-28 04:24:08 +0000 UTC" firstStartedPulling="2026-02-28 04:24:37.300433188 +0000 UTC m=+2931.964472497" lastFinishedPulling="2026-02-28 04:24:40.750621379 +0000 UTC m=+2935.414660698" observedRunningTime="2026-02-28 04:24:41.379737857 +0000 UTC m=+2936.043777206" watchObservedRunningTime="2026-02-28 04:24:41.399318145 +0000 UTC m=+2936.063357464"
Feb 28 04:24:48 crc kubenswrapper[4624]: I0228 04:24:48.692795 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:48 crc kubenswrapper[4624]: I0228 04:24:48.693415 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:48 crc kubenswrapper[4624]: I0228 04:24:48.777838 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:49 crc kubenswrapper[4624]: I0228 04:24:49.496639 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:49 crc kubenswrapper[4624]: I0228 04:24:49.539681 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 04:24:49 crc kubenswrapper[4624]: I0228 04:24:49.539836 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 04:24:49 crc kubenswrapper[4624]: I0228 04:24:49.566405 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg8gw"]
Feb 28 04:24:50 crc kubenswrapper[4624]: I0228 04:24:50.872682 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 28 04:24:51 crc kubenswrapper[4624]: I0228 04:24:51.469601 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xg8gw" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="registry-server" containerID="cri-o://0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004" gracePeriod=2
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.023501 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.162238 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-utilities\") pod \"34791b78-1508-4fbf-abba-4b9b8ee498c7\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") "
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.163433 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-utilities" (OuterVolumeSpecName: "utilities") pod "34791b78-1508-4fbf-abba-4b9b8ee498c7" (UID: "34791b78-1508-4fbf-abba-4b9b8ee498c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.163719 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-catalog-content\") pod \"34791b78-1508-4fbf-abba-4b9b8ee498c7\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") "
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.163929 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7pdl\" (UniqueName: \"kubernetes.io/projected/34791b78-1508-4fbf-abba-4b9b8ee498c7-kube-api-access-l7pdl\") pod \"34791b78-1508-4fbf-abba-4b9b8ee498c7\" (UID: \"34791b78-1508-4fbf-abba-4b9b8ee498c7\") "
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.164834 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.220709 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34791b78-1508-4fbf-abba-4b9b8ee498c7-kube-api-access-l7pdl" (OuterVolumeSpecName: "kube-api-access-l7pdl") pod "34791b78-1508-4fbf-abba-4b9b8ee498c7" (UID: "34791b78-1508-4fbf-abba-4b9b8ee498c7"). InnerVolumeSpecName "kube-api-access-l7pdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.277784 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7pdl\" (UniqueName: \"kubernetes.io/projected/34791b78-1508-4fbf-abba-4b9b8ee498c7-kube-api-access-l7pdl\") on node \"crc\" DevicePath \"\""
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.289423 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34791b78-1508-4fbf-abba-4b9b8ee498c7" (UID: "34791b78-1508-4fbf-abba-4b9b8ee498c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.380157 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34791b78-1508-4fbf-abba-4b9b8ee498c7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.484747 4624 generic.go:334] "Generic (PLEG): container finished" podID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerID="0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004" exitCode=0
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.484862 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerDied","Data":"0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004"}
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.485137 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg8gw" event={"ID":"34791b78-1508-4fbf-abba-4b9b8ee498c7","Type":"ContainerDied","Data":"8eab383694a71eae2a7829340835b47ac47dee44f3b2893713e97cdddafb4098"}
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.485168 4624 scope.go:117] "RemoveContainer" containerID="0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.484940 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg8gw"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.531022 4624 scope.go:117] "RemoveContainer" containerID="9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.557359 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg8gw"]
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.560832 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg8gw"]
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.567352 4624 scope.go:117] "RemoveContainer" containerID="32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.622625 4624 scope.go:117] "RemoveContainer" containerID="0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004"
Feb 28 04:24:52 crc kubenswrapper[4624]: E0228 04:24:52.623272 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004\": container with ID starting with 0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004 not found: ID does not exist" containerID="0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.623317 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004"} err="failed to get container status \"0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004\": rpc error: code = NotFound desc = could not find container \"0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004\": container with ID starting with 0f5db025a00be0cbf8631198f5913f7ca18fc05dc4790b1a59810d3ed2f4d004 not found: ID does not exist"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.623345 4624 scope.go:117] "RemoveContainer" containerID="9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3"
Feb 28 04:24:52 crc kubenswrapper[4624]: E0228 04:24:52.623841 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3\": container with ID starting with 9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3 not found: ID does not exist" containerID="9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.623898 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3"} err="failed to get container status \"9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3\": rpc error: code = NotFound desc = could not find container \"9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3\": container with ID starting with 9c7a66b31d9d782b4166eda8ff256add070c3f292b5c583ed88415a258f883f3 not found: ID does not exist"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.623945 4624 scope.go:117] "RemoveContainer" containerID="32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3"
Feb 28 04:24:52 crc kubenswrapper[4624]: E0228 04:24:52.624766 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3\": container with ID starting with 32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3 not found: ID does not exist" containerID="32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3"
Feb 28 04:24:52 crc kubenswrapper[4624]: I0228 04:24:52.624812 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3"} err="failed to get container status \"32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3\": rpc error: code = NotFound desc = could not find container \"32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3\": container with ID starting with 32c1b7c9b7a3fb25f4cf61faa3782ce8ff116eca25246acff5b1a0f7614002b3 not found: ID does not exist"
Feb 28 04:24:53 crc kubenswrapper[4624]: I0228 04:24:53.500995 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8687164b-ff55-49e1-ae97-79d38c05f861","Type":"ContainerStarted","Data":"adf970f6875eba47960d53bbb55c6e7a270950fa74eaf7ab32a4a539a2eb70cf"}
Feb 28 04:24:53 crc kubenswrapper[4624]: I0228 04:24:53.520761 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.618652141 podStartE2EDuration="1m9.520740188s" podCreationTimestamp="2026-02-28 04:23:44 +0000 UTC" firstStartedPulling="2026-02-28 04:23:46.966926737 +0000 UTC m=+2881.630966066" lastFinishedPulling="2026-02-28 04:24:50.869014774 +0000 UTC m=+2945.533054113" observedRunningTime="2026-02-28 04:24:53.51787752 +0000 UTC m=+2948.181916829" watchObservedRunningTime="2026-02-28 04:24:53.520740188 +0000 UTC m=+2948.184779497"
Feb 28 04:24:54 crc kubenswrapper[4624]: I0228 04:24:54.101772 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" path="/var/lib/kubelet/pods/34791b78-1508-4fbf-abba-4b9b8ee498c7/volumes"
Feb 28 04:25:02 crc kubenswrapper[4624]: I0228 04:25:02.674930 4624 scope.go:117] "RemoveContainer" containerID="dd0d591ce6666f475f6fe6fe873d5bb71f7e69ee908b1f27b332dd4fad247e26"
Feb 28 04:25:19 crc kubenswrapper[4624]: I0228 04:25:19.540168 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 04:25:19 crc kubenswrapper[4624]: I0228 04:25:19.540884 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 04:25:19 crc kubenswrapper[4624]: I0228 04:25:19.540951 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv"
Feb 28 04:25:19 crc kubenswrapper[4624]: I0228 04:25:19.542124 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 28 04:25:19 crc kubenswrapper[4624]: I0228 04:25:19.542216 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" gracePeriod=600
Feb 28 04:25:20 crc kubenswrapper[4624]: I0228 04:25:19.786989 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" exitCode=0
Feb 28 04:25:20 crc kubenswrapper[4624]: I0228 04:25:19.787138 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"}
Feb 28 04:25:20 crc kubenswrapper[4624]: I0228 04:25:19.787575 4624 scope.go:117] "RemoveContainer" containerID="fcf1ca1dfe79f2c158561d47710b70b5dfaf73e2fc1d5b04c1b6a36fd5c53020"
Feb 28 04:25:20 crc kubenswrapper[4624]: E0228 04:25:20.172891 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:25:20 crc kubenswrapper[4624]: I0228 04:25:20.804826 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"
Feb 28 04:25:20 crc kubenswrapper[4624]: E0228 04:25:20.805510 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:25:31 crc kubenswrapper[4624]: I0228 04:25:31.088069 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"
Feb 28 04:25:31 crc kubenswrapper[4624]: E0228 04:25:31.088979 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:25:31 crc kubenswrapper[4624]: E0228 04:25:31.905390 4624 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Feb 28 04:25:46 crc kubenswrapper[4624]: I0228 04:25:46.096495 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"
Feb 28 04:25:46 crc kubenswrapper[4624]: E0228 04:25:46.097531 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.088301 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4"
Feb 28 04:26:00 crc kubenswrapper[4624]: E0228 04:26:00.089119 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.147747 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537546-6bb9s"]
Feb 28 04:26:00 crc kubenswrapper[4624]: E0228 04:26:00.148228 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="extract-utilities"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.148246 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="extract-utilities"
Feb 28 04:26:00 crc kubenswrapper[4624]: E0228 04:26:00.148264 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e328e122-3e66-465c-a05d-8fddc9b01fce" containerName="oc"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.148270 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e328e122-3e66-465c-a05d-8fddc9b01fce" containerName="oc"
Feb 28 04:26:00 crc kubenswrapper[4624]: E0228 04:26:00.148277 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="registry-server"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.148283 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="registry-server"
Feb 28 04:26:00 crc kubenswrapper[4624]: E0228 04:26:00.148316 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="extract-content"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.148323 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="extract-content"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.148529 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e328e122-3e66-465c-a05d-8fddc9b01fce" containerName="oc"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.148544 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="34791b78-1508-4fbf-abba-4b9b8ee498c7" containerName="registry-server"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.149250 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537546-6bb9s"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.158503 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537546-6bb9s"]
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.160720 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.161643 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.161697 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.301410 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zmv\" (UniqueName: \"kubernetes.io/projected/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c-kube-api-access-c5zmv\") pod \"auto-csr-approver-29537546-6bb9s\" (UID: \"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c\") " pod="openshift-infra/auto-csr-approver-29537546-6bb9s"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.403465 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5zmv\" (UniqueName: \"kubernetes.io/projected/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c-kube-api-access-c5zmv\") pod \"auto-csr-approver-29537546-6bb9s\" (UID: \"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c\") " pod="openshift-infra/auto-csr-approver-29537546-6bb9s"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.430422 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5zmv\" (UniqueName: \"kubernetes.io/projected/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c-kube-api-access-c5zmv\") pod \"auto-csr-approver-29537546-6bb9s\" (UID: \"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c\") " pod="openshift-infra/auto-csr-approver-29537546-6bb9s"
Feb 28 04:26:00 crc kubenswrapper[4624]: I0228 04:26:00.467968 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537546-6bb9s"
Feb 28 04:26:01 crc kubenswrapper[4624]: W0228 04:26:01.019607 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3308cd5_0ea1_40c6_b2ef_b5b1e379af2c.slice/crio-7ac253f62ef57f1129665e39fb536bb5a2fba00a84b946aaaeec6447cdc086c8 WatchSource:0}: Error finding container 7ac253f62ef57f1129665e39fb536bb5a2fba00a84b946aaaeec6447cdc086c8: Status 404 returned error can't find the container with id 7ac253f62ef57f1129665e39fb536bb5a2fba00a84b946aaaeec6447cdc086c8
Feb 28 04:26:01 crc kubenswrapper[4624]: I0228 04:26:01.022760 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537546-6bb9s"]
Feb 28 04:26:01 crc kubenswrapper[4624]: I0228 04:26:01.255263 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537546-6bb9s" event={"ID":"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c","Type":"ContainerStarted","Data":"7ac253f62ef57f1129665e39fb536bb5a2fba00a84b946aaaeec6447cdc086c8"}
Feb 28 04:26:03 crc kubenswrapper[4624]: I0228 04:26:03.278426 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-infra/auto-csr-approver-29537546-6bb9s" event={"ID":"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c","Type":"ContainerStarted","Data":"cafedfd73f71ed29499c1e42da6c1c544da86524d47f06f2ced614f314fffcad"} Feb 28 04:26:03 crc kubenswrapper[4624]: I0228 04:26:03.302864 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537546-6bb9s" podStartSLOduration=2.2687509820000002 podStartE2EDuration="3.302823223s" podCreationTimestamp="2026-02-28 04:26:00 +0000 UTC" firstStartedPulling="2026-02-28 04:26:01.042512568 +0000 UTC m=+3015.706551877" lastFinishedPulling="2026-02-28 04:26:02.076584799 +0000 UTC m=+3016.740624118" observedRunningTime="2026-02-28 04:26:03.293335468 +0000 UTC m=+3017.957374777" watchObservedRunningTime="2026-02-28 04:26:03.302823223 +0000 UTC m=+3017.966862542" Feb 28 04:26:04 crc kubenswrapper[4624]: I0228 04:26:04.295318 4624 generic.go:334] "Generic (PLEG): container finished" podID="e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c" containerID="cafedfd73f71ed29499c1e42da6c1c544da86524d47f06f2ced614f314fffcad" exitCode=0 Feb 28 04:26:04 crc kubenswrapper[4624]: I0228 04:26:04.295500 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537546-6bb9s" event={"ID":"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c","Type":"ContainerDied","Data":"cafedfd73f71ed29499c1e42da6c1c544da86524d47f06f2ced614f314fffcad"} Feb 28 04:26:05 crc kubenswrapper[4624]: I0228 04:26:05.744717 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537546-6bb9s" Feb 28 04:26:05 crc kubenswrapper[4624]: I0228 04:26:05.784724 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5zmv\" (UniqueName: \"kubernetes.io/projected/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c-kube-api-access-c5zmv\") pod \"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c\" (UID: \"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c\") " Feb 28 04:26:05 crc kubenswrapper[4624]: I0228 04:26:05.817995 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c-kube-api-access-c5zmv" (OuterVolumeSpecName: "kube-api-access-c5zmv") pod "e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c" (UID: "e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c"). InnerVolumeSpecName "kube-api-access-c5zmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:26:05 crc kubenswrapper[4624]: I0228 04:26:05.887973 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5zmv\" (UniqueName: \"kubernetes.io/projected/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c-kube-api-access-c5zmv\") on node \"crc\" DevicePath \"\"" Feb 28 04:26:06 crc kubenswrapper[4624]: I0228 04:26:06.336011 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537546-6bb9s" event={"ID":"e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c","Type":"ContainerDied","Data":"7ac253f62ef57f1129665e39fb536bb5a2fba00a84b946aaaeec6447cdc086c8"} Feb 28 04:26:06 crc kubenswrapper[4624]: I0228 04:26:06.336536 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ac253f62ef57f1129665e39fb536bb5a2fba00a84b946aaaeec6447cdc086c8" Feb 28 04:26:06 crc kubenswrapper[4624]: I0228 04:26:06.336119 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537546-6bb9s" Feb 28 04:26:06 crc kubenswrapper[4624]: I0228 04:26:06.409476 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537540-6v8nf"] Feb 28 04:26:06 crc kubenswrapper[4624]: I0228 04:26:06.428100 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537540-6v8nf"] Feb 28 04:26:08 crc kubenswrapper[4624]: I0228 04:26:08.103702 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10" path="/var/lib/kubelet/pods/e2d0eeeb-08a5-44fd-9deb-3fc62c59aa10/volumes" Feb 28 04:26:12 crc kubenswrapper[4624]: I0228 04:26:12.088220 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:26:12 crc kubenswrapper[4624]: E0228 04:26:12.089034 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:26:23 crc kubenswrapper[4624]: I0228 04:26:23.087295 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:26:23 crc kubenswrapper[4624]: E0228 04:26:23.088384 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:26:38 crc kubenswrapper[4624]: I0228 04:26:38.087400 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:26:38 crc kubenswrapper[4624]: E0228 04:26:38.090367 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:26:52 crc kubenswrapper[4624]: I0228 04:26:52.089346 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:26:52 crc kubenswrapper[4624]: E0228 04:26:52.090278 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:27:02 crc kubenswrapper[4624]: I0228 04:27:02.836911 4624 scope.go:117] "RemoveContainer" containerID="e4e45a03135cb7489d62c371152ee3e2ecd713ead015310db28c6756c57e6a82" Feb 28 04:27:05 crc kubenswrapper[4624]: I0228 04:27:05.087669 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:27:05 crc kubenswrapper[4624]: E0228 04:27:05.088426 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:27:20 crc kubenswrapper[4624]: I0228 04:27:20.089563 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:27:20 crc kubenswrapper[4624]: E0228 04:27:20.090467 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:27:35 crc kubenswrapper[4624]: I0228 04:27:35.090688 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:27:35 crc kubenswrapper[4624]: E0228 04:27:35.091489 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:27:50 crc kubenswrapper[4624]: I0228 04:27:50.087820 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:27:50 crc kubenswrapper[4624]: E0228 04:27:50.090423 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.193307 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537548-dkvwp"] Feb 28 04:28:00 crc kubenswrapper[4624]: E0228 04:28:00.195933 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c" containerName="oc" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.196014 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c" containerName="oc" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.196329 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c" containerName="oc" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.197339 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.201496 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.201724 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.201772 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.204600 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537548-dkvwp"] Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.252446 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6qs\" (UniqueName: \"kubernetes.io/projected/264bd128-b339-4e6f-a690-70565410014f-kube-api-access-qx6qs\") pod \"auto-csr-approver-29537548-dkvwp\" (UID: \"264bd128-b339-4e6f-a690-70565410014f\") " pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.354482 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6qs\" (UniqueName: \"kubernetes.io/projected/264bd128-b339-4e6f-a690-70565410014f-kube-api-access-qx6qs\") pod \"auto-csr-approver-29537548-dkvwp\" (UID: \"264bd128-b339-4e6f-a690-70565410014f\") " pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.385896 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6qs\" (UniqueName: \"kubernetes.io/projected/264bd128-b339-4e6f-a690-70565410014f-kube-api-access-qx6qs\") pod \"auto-csr-approver-29537548-dkvwp\" (UID: \"264bd128-b339-4e6f-a690-70565410014f\") " 
pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:00 crc kubenswrapper[4624]: I0228 04:28:00.534241 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:01 crc kubenswrapper[4624]: I0228 04:28:01.133863 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537548-dkvwp"] Feb 28 04:28:01 crc kubenswrapper[4624]: I0228 04:28:01.146788 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:28:01 crc kubenswrapper[4624]: I0228 04:28:01.824529 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" event={"ID":"264bd128-b339-4e6f-a690-70565410014f","Type":"ContainerStarted","Data":"5977b90fd336098ac87dae021ba7c1f40ed224092d27c65e481b61d5be654034"} Feb 28 04:28:02 crc kubenswrapper[4624]: I0228 04:28:02.834472 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" event={"ID":"264bd128-b339-4e6f-a690-70565410014f","Type":"ContainerStarted","Data":"f834e427af34aef08248657b263f74859608e960e26cee4c35d7514af684a07d"} Feb 28 04:28:02 crc kubenswrapper[4624]: I0228 04:28:02.854530 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" podStartSLOduration=1.646796228 podStartE2EDuration="2.854507502s" podCreationTimestamp="2026-02-28 04:28:00 +0000 UTC" firstStartedPulling="2026-02-28 04:28:01.146531514 +0000 UTC m=+3135.810570823" lastFinishedPulling="2026-02-28 04:28:02.354242788 +0000 UTC m=+3137.018282097" observedRunningTime="2026-02-28 04:28:02.85259918 +0000 UTC m=+3137.516638509" watchObservedRunningTime="2026-02-28 04:28:02.854507502 +0000 UTC m=+3137.518546821" Feb 28 04:28:03 crc kubenswrapper[4624]: I0228 04:28:03.847723 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="264bd128-b339-4e6f-a690-70565410014f" containerID="f834e427af34aef08248657b263f74859608e960e26cee4c35d7514af684a07d" exitCode=0 Feb 28 04:28:03 crc kubenswrapper[4624]: I0228 04:28:03.847798 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" event={"ID":"264bd128-b339-4e6f-a690-70565410014f","Type":"ContainerDied","Data":"f834e427af34aef08248657b263f74859608e960e26cee4c35d7514af684a07d"} Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.087742 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:28:05 crc kubenswrapper[4624]: E0228 04:28:05.088501 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.299833 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.362718 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx6qs\" (UniqueName: \"kubernetes.io/projected/264bd128-b339-4e6f-a690-70565410014f-kube-api-access-qx6qs\") pod \"264bd128-b339-4e6f-a690-70565410014f\" (UID: \"264bd128-b339-4e6f-a690-70565410014f\") " Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.369185 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264bd128-b339-4e6f-a690-70565410014f-kube-api-access-qx6qs" (OuterVolumeSpecName: "kube-api-access-qx6qs") pod "264bd128-b339-4e6f-a690-70565410014f" (UID: "264bd128-b339-4e6f-a690-70565410014f"). InnerVolumeSpecName "kube-api-access-qx6qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.466128 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx6qs\" (UniqueName: \"kubernetes.io/projected/264bd128-b339-4e6f-a690-70565410014f-kube-api-access-qx6qs\") on node \"crc\" DevicePath \"\"" Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.871332 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" event={"ID":"264bd128-b339-4e6f-a690-70565410014f","Type":"ContainerDied","Data":"5977b90fd336098ac87dae021ba7c1f40ed224092d27c65e481b61d5be654034"} Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.871676 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5977b90fd336098ac87dae021ba7c1f40ed224092d27c65e481b61d5be654034" Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.871427 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537548-dkvwp" Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.950922 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537542-psh46"] Feb 28 04:28:05 crc kubenswrapper[4624]: I0228 04:28:05.950990 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537542-psh46"] Feb 28 04:28:06 crc kubenswrapper[4624]: I0228 04:28:06.111445 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a28442d-b6e0-4152-95e0-0032edfde9dc" path="/var/lib/kubelet/pods/8a28442d-b6e0-4152-95e0-0032edfde9dc/volumes" Feb 28 04:28:16 crc kubenswrapper[4624]: I0228 04:28:16.130166 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:28:16 crc kubenswrapper[4624]: E0228 04:28:16.131079 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:28:29 crc kubenswrapper[4624]: I0228 04:28:29.087896 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:28:29 crc kubenswrapper[4624]: E0228 04:28:29.089029 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:28:42 crc kubenswrapper[4624]: I0228 04:28:42.087600 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:28:42 crc kubenswrapper[4624]: E0228 04:28:42.088377 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:28:54 crc kubenswrapper[4624]: I0228 04:28:54.087832 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:28:54 crc kubenswrapper[4624]: E0228 04:28:54.088818 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:29:02 crc kubenswrapper[4624]: I0228 04:29:02.945411 4624 scope.go:117] "RemoveContainer" containerID="7c66289e3f8b86814e1b056c661fb406102436c3404b429710aba21243fd059d" Feb 28 04:29:07 crc kubenswrapper[4624]: I0228 04:29:07.087935 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:29:07 crc kubenswrapper[4624]: E0228 04:29:07.088672 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:29:20 crc kubenswrapper[4624]: I0228 04:29:20.087943 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:29:20 crc kubenswrapper[4624]: E0228 04:29:20.088693 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:29:32 crc kubenswrapper[4624]: I0228 04:29:32.087501 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:29:32 crc kubenswrapper[4624]: E0228 04:29:32.088553 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:29:45 crc kubenswrapper[4624]: I0228 04:29:45.088128 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:29:45 crc kubenswrapper[4624]: E0228 04:29:45.089076 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.094536 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:30:00 crc kubenswrapper[4624]: E0228 04:30:00.095477 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.141664 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537550-gm6zl"] Feb 28 04:30:00 crc kubenswrapper[4624]: E0228 04:30:00.142258 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264bd128-b339-4e6f-a690-70565410014f" containerName="oc" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.142280 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="264bd128-b339-4e6f-a690-70565410014f" containerName="oc" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.142557 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="264bd128-b339-4e6f-a690-70565410014f" containerName="oc" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.143390 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.151035 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.151604 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.151729 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.160792 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5"] Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.162120 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.167047 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.167412 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.175643 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537550-gm6zl"] Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.183588 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5"] Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.264991 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-config-volume\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.265358 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-secret-volume\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.265475 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhm7v\" (UniqueName: \"kubernetes.io/projected/dd2a6fe1-f278-439b-9f6a-3f72b9247d15-kube-api-access-zhm7v\") pod \"auto-csr-approver-29537550-gm6zl\" (UID: \"dd2a6fe1-f278-439b-9f6a-3f72b9247d15\") " pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.265575 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct75x\" (UniqueName: \"kubernetes.io/projected/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-kube-api-access-ct75x\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.367450 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-secret-volume\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 
04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.367516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhm7v\" (UniqueName: \"kubernetes.io/projected/dd2a6fe1-f278-439b-9f6a-3f72b9247d15-kube-api-access-zhm7v\") pod \"auto-csr-approver-29537550-gm6zl\" (UID: \"dd2a6fe1-f278-439b-9f6a-3f72b9247d15\") " pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.367546 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct75x\" (UniqueName: \"kubernetes.io/projected/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-kube-api-access-ct75x\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.367598 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-config-volume\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.368408 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-config-volume\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.377125 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-secret-volume\") pod \"collect-profiles-29537550-22vr5\" (UID: 
\"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.392055 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct75x\" (UniqueName: \"kubernetes.io/projected/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-kube-api-access-ct75x\") pod \"collect-profiles-29537550-22vr5\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.398517 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhm7v\" (UniqueName: \"kubernetes.io/projected/dd2a6fe1-f278-439b-9f6a-3f72b9247d15-kube-api-access-zhm7v\") pod \"auto-csr-approver-29537550-gm6zl\" (UID: \"dd2a6fe1-f278-439b-9f6a-3f72b9247d15\") " pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.459841 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.480049 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:00 crc kubenswrapper[4624]: I0228 04:30:00.933211 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537550-gm6zl"] Feb 28 04:30:01 crc kubenswrapper[4624]: I0228 04:30:01.014525 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" event={"ID":"dd2a6fe1-f278-439b-9f6a-3f72b9247d15","Type":"ContainerStarted","Data":"7f65f4b76448a6497d6478703d133aa16ae5fabd81e51bcd9453eb0325faabb0"} Feb 28 04:30:01 crc kubenswrapper[4624]: W0228 04:30:01.038383 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09af07c0_6cb4_4a52_bc1f_9dbd8f3535df.slice/crio-996c7e2c1be4d367ed63268a3ab78f39ec0d2d056e1fe3e0e44a5b7ef9e05217 WatchSource:0}: Error finding container 996c7e2c1be4d367ed63268a3ab78f39ec0d2d056e1fe3e0e44a5b7ef9e05217: Status 404 returned error can't find the container with id 996c7e2c1be4d367ed63268a3ab78f39ec0d2d056e1fe3e0e44a5b7ef9e05217 Feb 28 04:30:01 crc kubenswrapper[4624]: I0228 04:30:01.041898 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5"] Feb 28 04:30:02 crc kubenswrapper[4624]: I0228 04:30:02.028712 4624 generic.go:334] "Generic (PLEG): container finished" podID="09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" containerID="9aa4c3680d399d4985c70bcfcbab74be250c139c10566bf23993a546fb1e3f57" exitCode=0 Feb 28 04:30:02 crc kubenswrapper[4624]: I0228 04:30:02.028766 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" event={"ID":"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df","Type":"ContainerDied","Data":"9aa4c3680d399d4985c70bcfcbab74be250c139c10566bf23993a546fb1e3f57"} Feb 28 04:30:02 crc kubenswrapper[4624]: I0228 04:30:02.029336 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" event={"ID":"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df","Type":"ContainerStarted","Data":"996c7e2c1be4d367ed63268a3ab78f39ec0d2d056e1fe3e0e44a5b7ef9e05217"} Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.039416 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" event={"ID":"dd2a6fe1-f278-439b-9f6a-3f72b9247d15","Type":"ContainerStarted","Data":"ad1b7710e185408ebfc0ae02c0bf96cfa4285f30cbd667df5003de6853956e54"} Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.062273 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" podStartSLOduration=1.530482611 podStartE2EDuration="3.062257037s" podCreationTimestamp="2026-02-28 04:30:00 +0000 UTC" firstStartedPulling="2026-02-28 04:30:00.945938575 +0000 UTC m=+3255.609977884" lastFinishedPulling="2026-02-28 04:30:02.477712981 +0000 UTC m=+3257.141752310" observedRunningTime="2026-02-28 04:30:03.05862941 +0000 UTC m=+3257.722668719" watchObservedRunningTime="2026-02-28 04:30:03.062257037 +0000 UTC m=+3257.726296346" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.439415 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.542507 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-secret-volume\") pod \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.542884 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-config-volume\") pod \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.542998 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct75x\" (UniqueName: \"kubernetes.io/projected/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-kube-api-access-ct75x\") pod \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\" (UID: \"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df\") " Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.544149 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-config-volume" (OuterVolumeSpecName: "config-volume") pod "09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" (UID: "09af07c0-6cb4-4a52-bc1f-9dbd8f3535df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.549927 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-kube-api-access-ct75x" (OuterVolumeSpecName: "kube-api-access-ct75x") pod "09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" (UID: "09af07c0-6cb4-4a52-bc1f-9dbd8f3535df"). 
InnerVolumeSpecName "kube-api-access-ct75x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.562326 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" (UID: "09af07c0-6cb4-4a52-bc1f-9dbd8f3535df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.645812 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.645847 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct75x\" (UniqueName: \"kubernetes.io/projected/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-kube-api-access-ct75x\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:03 crc kubenswrapper[4624]: I0228 04:30:03.645858 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09af07c0-6cb4-4a52-bc1f-9dbd8f3535df-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.054807 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.054853 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-22vr5" event={"ID":"09af07c0-6cb4-4a52-bc1f-9dbd8f3535df","Type":"ContainerDied","Data":"996c7e2c1be4d367ed63268a3ab78f39ec0d2d056e1fe3e0e44a5b7ef9e05217"} Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.054901 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996c7e2c1be4d367ed63268a3ab78f39ec0d2d056e1fe3e0e44a5b7ef9e05217" Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.057478 4624 generic.go:334] "Generic (PLEG): container finished" podID="dd2a6fe1-f278-439b-9f6a-3f72b9247d15" containerID="ad1b7710e185408ebfc0ae02c0bf96cfa4285f30cbd667df5003de6853956e54" exitCode=0 Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.057530 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" event={"ID":"dd2a6fe1-f278-439b-9f6a-3f72b9247d15","Type":"ContainerDied","Data":"ad1b7710e185408ebfc0ae02c0bf96cfa4285f30cbd667df5003de6853956e54"} Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.508069 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"] Feb 28 04:30:04 crc kubenswrapper[4624]: I0228 04:30:04.516165 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537505-lrm8g"] Feb 28 04:30:05 crc kubenswrapper[4624]: I0228 04:30:05.420575 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:05 crc kubenswrapper[4624]: I0228 04:30:05.476861 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhm7v\" (UniqueName: \"kubernetes.io/projected/dd2a6fe1-f278-439b-9f6a-3f72b9247d15-kube-api-access-zhm7v\") pod \"dd2a6fe1-f278-439b-9f6a-3f72b9247d15\" (UID: \"dd2a6fe1-f278-439b-9f6a-3f72b9247d15\") " Feb 28 04:30:05 crc kubenswrapper[4624]: I0228 04:30:05.483237 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2a6fe1-f278-439b-9f6a-3f72b9247d15-kube-api-access-zhm7v" (OuterVolumeSpecName: "kube-api-access-zhm7v") pod "dd2a6fe1-f278-439b-9f6a-3f72b9247d15" (UID: "dd2a6fe1-f278-439b-9f6a-3f72b9247d15"). InnerVolumeSpecName "kube-api-access-zhm7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:30:05 crc kubenswrapper[4624]: I0228 04:30:05.579291 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhm7v\" (UniqueName: \"kubernetes.io/projected/dd2a6fe1-f278-439b-9f6a-3f72b9247d15-kube-api-access-zhm7v\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:06 crc kubenswrapper[4624]: I0228 04:30:06.107223 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" Feb 28 04:30:06 crc kubenswrapper[4624]: I0228 04:30:06.117735 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9384484-a89a-487d-9cc3-327226cc1847" path="/var/lib/kubelet/pods/c9384484-a89a-487d-9cc3-327226cc1847/volumes" Feb 28 04:30:06 crc kubenswrapper[4624]: I0228 04:30:06.121285 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537550-gm6zl" event={"ID":"dd2a6fe1-f278-439b-9f6a-3f72b9247d15","Type":"ContainerDied","Data":"7f65f4b76448a6497d6478703d133aa16ae5fabd81e51bcd9453eb0325faabb0"} Feb 28 04:30:06 crc kubenswrapper[4624]: I0228 04:30:06.121329 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f65f4b76448a6497d6478703d133aa16ae5fabd81e51bcd9453eb0325faabb0" Feb 28 04:30:06 crc kubenswrapper[4624]: I0228 04:30:06.162832 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537544-sqhfj"] Feb 28 04:30:06 crc kubenswrapper[4624]: I0228 04:30:06.171565 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537544-sqhfj"] Feb 28 04:30:08 crc kubenswrapper[4624]: I0228 04:30:08.097331 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e328e122-3e66-465c-a05d-8fddc9b01fce" path="/var/lib/kubelet/pods/e328e122-3e66-465c-a05d-8fddc9b01fce/volumes" Feb 28 04:30:11 crc kubenswrapper[4624]: I0228 04:30:11.088227 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:30:11 crc kubenswrapper[4624]: E0228 04:30:11.089173 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:30:19 crc kubenswrapper[4624]: I0228 04:30:19.234982 4624 generic.go:334] "Generic (PLEG): container finished" podID="8687164b-ff55-49e1-ae97-79d38c05f861" containerID="adf970f6875eba47960d53bbb55c6e7a270950fa74eaf7ab32a4a539a2eb70cf" exitCode=0 Feb 28 04:30:19 crc kubenswrapper[4624]: I0228 04:30:19.235070 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8687164b-ff55-49e1-ae97-79d38c05f861","Type":"ContainerDied","Data":"adf970f6875eba47960d53bbb55c6e7a270950fa74eaf7ab32a4a539a2eb70cf"} Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.933429 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961260 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ca-certs\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961314 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-config-data\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961360 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-temporary\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: 
\"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961383 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961415 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c2th\" (UniqueName: \"kubernetes.io/projected/8687164b-ff55-49e1-ae97-79d38c05f861-kube-api-access-6c2th\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961438 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961478 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ssh-key\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961507 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config-secret\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.961544 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-workdir\") pod \"8687164b-ff55-49e1-ae97-79d38c05f861\" (UID: \"8687164b-ff55-49e1-ae97-79d38c05f861\") " Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.962596 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-config-data" (OuterVolumeSpecName: "config-data") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.965329 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.987665 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8687164b-ff55-49e1-ae97-79d38c05f861-kube-api-access-6c2th" (OuterVolumeSpecName: "kube-api-access-6c2th") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "kube-api-access-6c2th". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:30:20 crc kubenswrapper[4624]: I0228 04:30:20.988311 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.028150 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.028533 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.035920 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.051955 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067401 4624 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067550 4624 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067733 4624 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067781 4624 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067794 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c2th\" (UniqueName: \"kubernetes.io/projected/8687164b-ff55-49e1-ae97-79d38c05f861-kube-api-access-6c2th\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067805 4624 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.067815 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 
04:30:21.067824 4624 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8687164b-ff55-49e1-ae97-79d38c05f861-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.089098 4624 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.095726 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8687164b-ff55-49e1-ae97-79d38c05f861" (UID: "8687164b-ff55-49e1-ae97-79d38c05f861"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.171190 4624 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8687164b-ff55-49e1-ae97-79d38c05f861-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.171240 4624 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.260527 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8687164b-ff55-49e1-ae97-79d38c05f861","Type":"ContainerDied","Data":"b90ced3c5ee418b31a50043fcac48bff6458d84d1ecb1834bc057f818b6c2420"} Feb 28 04:30:21 crc kubenswrapper[4624]: I0228 04:30:21.260587 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90ced3c5ee418b31a50043fcac48bff6458d84d1ecb1834bc057f818b6c2420" Feb 28 04:30:21 crc 
kubenswrapper[4624]: I0228 04:30:21.260984 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.971558 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 04:30:23 crc kubenswrapper[4624]: E0228 04:30:23.985816 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a6fe1-f278-439b-9f6a-3f72b9247d15" containerName="oc" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.986067 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a6fe1-f278-439b-9f6a-3f72b9247d15" containerName="oc" Feb 28 04:30:23 crc kubenswrapper[4624]: E0228 04:30:23.986231 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8687164b-ff55-49e1-ae97-79d38c05f861" containerName="tempest-tests-tempest-tests-runner" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.986291 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="8687164b-ff55-49e1-ae97-79d38c05f861" containerName="tempest-tests-tempest-tests-runner" Feb 28 04:30:23 crc kubenswrapper[4624]: E0228 04:30:23.986362 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" containerName="collect-profiles" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.986422 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" containerName="collect-profiles" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.986672 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a6fe1-f278-439b-9f6a-3f72b9247d15" containerName="oc" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.986738 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="09af07c0-6cb4-4a52-bc1f-9dbd8f3535df" containerName="collect-profiles" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 
04:30:23.986798 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="8687164b-ff55-49e1-ae97-79d38c05f861" containerName="tempest-tests-tempest-tests-runner" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.987351 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.987529 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:23 crc kubenswrapper[4624]: I0228 04:30:23.990031 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k5ddr" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.139833 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2ts\" (UniqueName: \"kubernetes.io/projected/b5bf464d-f307-4f8b-be8c-cfe363cc6daa-kube-api-access-zj2ts\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.140002 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.241572 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.241716 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2ts\" (UniqueName: \"kubernetes.io/projected/b5bf464d-f307-4f8b-be8c-cfe363cc6daa-kube-api-access-zj2ts\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.242123 4624 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.265360 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2ts\" (UniqueName: \"kubernetes.io/projected/b5bf464d-f307-4f8b-be8c-cfe363cc6daa-kube-api-access-zj2ts\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.268481 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b5bf464d-f307-4f8b-be8c-cfe363cc6daa\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.335924 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 04:30:24 crc kubenswrapper[4624]: I0228 04:30:24.843621 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 04:30:25 crc kubenswrapper[4624]: I0228 04:30:25.309861 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b5bf464d-f307-4f8b-be8c-cfe363cc6daa","Type":"ContainerStarted","Data":"2c7bf1e7cbcf7f10bffe60319cab4d4219a3bd20cbf881e3944775b62e44503e"} Feb 28 04:30:26 crc kubenswrapper[4624]: I0228 04:30:26.087438 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:30:26 crc kubenswrapper[4624]: I0228 04:30:26.326641 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"06e581f0888923edc4bc489a70ad03e776ef8104d0b2056afc183dca47bf121f"} Feb 28 04:30:26 crc kubenswrapper[4624]: I0228 04:30:26.329685 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b5bf464d-f307-4f8b-be8c-cfe363cc6daa","Type":"ContainerStarted","Data":"2a98da539a0e9068bb81efa41e574416719f2179528e216bbf1ea573c19b7cba"} Feb 28 04:30:26 crc kubenswrapper[4624]: I0228 04:30:26.383549 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.403435981 podStartE2EDuration="3.383517006s" podCreationTimestamp="2026-02-28 04:30:23 +0000 UTC" firstStartedPulling="2026-02-28 04:30:24.867358202 +0000 UTC m=+3279.531397501" lastFinishedPulling="2026-02-28 04:30:25.847439217 +0000 UTC m=+3280.511478526" observedRunningTime="2026-02-28 
04:30:26.371715727 +0000 UTC m=+3281.035755046" watchObservedRunningTime="2026-02-28 04:30:26.383517006 +0000 UTC m=+3281.047556345" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.521204 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v6hw7/must-gather-rz4fg"] Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.537216 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.544727 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v6hw7"/"kube-root-ca.crt" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.571249 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v6hw7"/"openshift-service-ca.crt" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.577648 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v6hw7/must-gather-rz4fg"] Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.618986 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j67n\" (UniqueName: \"kubernetes.io/projected/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-kube-api-access-4j67n\") pod \"must-gather-rz4fg\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.619068 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-must-gather-output\") pod \"must-gather-rz4fg\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.721322 4624 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4j67n\" (UniqueName: \"kubernetes.io/projected/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-kube-api-access-4j67n\") pod \"must-gather-rz4fg\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.721387 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-must-gather-output\") pod \"must-gather-rz4fg\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.721795 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-must-gather-output\") pod \"must-gather-rz4fg\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.755738 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j67n\" (UniqueName: \"kubernetes.io/projected/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-kube-api-access-4j67n\") pod \"must-gather-rz4fg\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:47 crc kubenswrapper[4624]: I0228 04:30:47.893816 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:30:48 crc kubenswrapper[4624]: I0228 04:30:48.368035 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v6hw7/must-gather-rz4fg"] Feb 28 04:30:48 crc kubenswrapper[4624]: I0228 04:30:48.599306 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" event={"ID":"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c","Type":"ContainerStarted","Data":"cee2f6776f16e554ce2ca7a58509bddeb5ba453fe561a4f14086b9cb353f6b24"} Feb 28 04:30:57 crc kubenswrapper[4624]: I0228 04:30:57.701679 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" event={"ID":"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c","Type":"ContainerStarted","Data":"dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853"} Feb 28 04:30:57 crc kubenswrapper[4624]: I0228 04:30:57.702194 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" event={"ID":"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c","Type":"ContainerStarted","Data":"dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423"} Feb 28 04:30:57 crc kubenswrapper[4624]: I0228 04:30:57.724157 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" podStartSLOduration=2.147093935 podStartE2EDuration="10.724141569s" podCreationTimestamp="2026-02-28 04:30:47 +0000 UTC" firstStartedPulling="2026-02-28 04:30:48.379326706 +0000 UTC m=+3303.043366015" lastFinishedPulling="2026-02-28 04:30:56.95637434 +0000 UTC m=+3311.620413649" observedRunningTime="2026-02-28 04:30:57.71937632 +0000 UTC m=+3312.383415629" watchObservedRunningTime="2026-02-28 04:30:57.724141569 +0000 UTC m=+3312.388180878" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.039660 4624 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-v6hw7/crc-debug-ft2w7"] Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.041374 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.044175 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v6hw7"/"default-dockercfg-hzwg2" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.078579 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-host\") pod \"crc-debug-ft2w7\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.078651 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4khg\" (UniqueName: \"kubernetes.io/projected/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-kube-api-access-p4khg\") pod \"crc-debug-ft2w7\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.180971 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-host\") pod \"crc-debug-ft2w7\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.181050 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4khg\" (UniqueName: \"kubernetes.io/projected/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-kube-api-access-p4khg\") pod \"crc-debug-ft2w7\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc 
kubenswrapper[4624]: I0228 04:31:02.181386 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-host\") pod \"crc-debug-ft2w7\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.202933 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4khg\" (UniqueName: \"kubernetes.io/projected/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-kube-api-access-p4khg\") pod \"crc-debug-ft2w7\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.360257 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:02 crc kubenswrapper[4624]: W0228 04:31:02.411257 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ed4d3b_3677_4c6a_a605_2a2ce60d22e2.slice/crio-0831be0a9cde6658116401c1637ba5a576d0f2deb7cb0172e61ae1f9f4d418a2 WatchSource:0}: Error finding container 0831be0a9cde6658116401c1637ba5a576d0f2deb7cb0172e61ae1f9f4d418a2: Status 404 returned error can't find the container with id 0831be0a9cde6658116401c1637ba5a576d0f2deb7cb0172e61ae1f9f4d418a2 Feb 28 04:31:02 crc kubenswrapper[4624]: I0228 04:31:02.758710 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" event={"ID":"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2","Type":"ContainerStarted","Data":"0831be0a9cde6658116401c1637ba5a576d0f2deb7cb0172e61ae1f9f4d418a2"} Feb 28 04:31:03 crc kubenswrapper[4624]: I0228 04:31:03.109263 4624 scope.go:117] "RemoveContainer" containerID="b9e1e23576f53adc11960b9a462d9264ea66e5857743a47e5b828e05d8c8613f" Feb 28 04:31:03 crc kubenswrapper[4624]: I0228 
04:31:03.196348 4624 scope.go:117] "RemoveContainer" containerID="bab4fd0b0c41cad86b5a833601d69e979905ca0903ecd9a9b312391960ea1748" Feb 28 04:31:16 crc kubenswrapper[4624]: I0228 04:31:16.907199 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" event={"ID":"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2","Type":"ContainerStarted","Data":"5d708ac2062b525cf5c210286792ffed78243d42996d5de2b6298456a2366d83"} Feb 28 04:31:16 crc kubenswrapper[4624]: I0228 04:31:16.927078 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" podStartSLOduration=1.705392037 podStartE2EDuration="14.927058456s" podCreationTimestamp="2026-02-28 04:31:02 +0000 UTC" firstStartedPulling="2026-02-28 04:31:02.414118388 +0000 UTC m=+3317.078157687" lastFinishedPulling="2026-02-28 04:31:15.635784797 +0000 UTC m=+3330.299824106" observedRunningTime="2026-02-28 04:31:16.920785296 +0000 UTC m=+3331.584824605" watchObservedRunningTime="2026-02-28 04:31:16.927058456 +0000 UTC m=+3331.591097765" Feb 28 04:31:57 crc kubenswrapper[4624]: I0228 04:31:57.352388 4624 generic.go:334] "Generic (PLEG): container finished" podID="14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" containerID="5d708ac2062b525cf5c210286792ffed78243d42996d5de2b6298456a2366d83" exitCode=0 Feb 28 04:31:57 crc kubenswrapper[4624]: I0228 04:31:57.352512 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" event={"ID":"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2","Type":"ContainerDied","Data":"5d708ac2062b525cf5c210286792ffed78243d42996d5de2b6298456a2366d83"} Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.453351 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.506950 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-ft2w7"] Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.520997 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-ft2w7"] Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.596641 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-host\") pod \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.596739 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-host" (OuterVolumeSpecName: "host") pod "14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" (UID: "14ed4d3b-3677-4c6a-a605-2a2ce60d22e2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.596846 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4khg\" (UniqueName: \"kubernetes.io/projected/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-kube-api-access-p4khg\") pod \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\" (UID: \"14ed4d3b-3677-4c6a-a605-2a2ce60d22e2\") " Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.597683 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-host\") on node \"crc\" DevicePath \"\"" Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.609317 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-kube-api-access-p4khg" (OuterVolumeSpecName: "kube-api-access-p4khg") pod "14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" (UID: "14ed4d3b-3677-4c6a-a605-2a2ce60d22e2"). InnerVolumeSpecName "kube-api-access-p4khg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:31:58 crc kubenswrapper[4624]: I0228 04:31:58.700181 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4khg\" (UniqueName: \"kubernetes.io/projected/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2-kube-api-access-p4khg\") on node \"crc\" DevicePath \"\"" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.369540 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0831be0a9cde6658116401c1637ba5a576d0f2deb7cb0172e61ae1f9f4d418a2" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.369604 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-ft2w7" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.746630 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-lnnvn"] Feb 28 04:31:59 crc kubenswrapper[4624]: E0228 04:31:59.747062 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" containerName="container-00" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.747076 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" containerName="container-00" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.747318 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" containerName="container-00" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.748667 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.751505 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v6hw7"/"default-dockercfg-hzwg2" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.822891 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xt9\" (UniqueName: \"kubernetes.io/projected/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-kube-api-access-29xt9\") pod \"crc-debug-lnnvn\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.823012 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-host\") pod \"crc-debug-lnnvn\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " 
pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.925228 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xt9\" (UniqueName: \"kubernetes.io/projected/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-kube-api-access-29xt9\") pod \"crc-debug-lnnvn\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.925387 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-host\") pod \"crc-debug-lnnvn\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.925641 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-host\") pod \"crc-debug-lnnvn\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:31:59 crc kubenswrapper[4624]: I0228 04:31:59.963130 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xt9\" (UniqueName: \"kubernetes.io/projected/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-kube-api-access-29xt9\") pod \"crc-debug-lnnvn\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.070263 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.106050 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ed4d3b-3677-4c6a-a605-2a2ce60d22e2" path="/var/lib/kubelet/pods/14ed4d3b-3677-4c6a-a605-2a2ce60d22e2/volumes" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.159867 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537552-8n9x7"] Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.162727 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.169388 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.169840 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.169942 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.181075 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537552-8n9x7"] Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.236551 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflnv\" (UniqueName: \"kubernetes.io/projected/420d340a-b4a1-4b60-9268-24770e57adb1-kube-api-access-hflnv\") pod \"auto-csr-approver-29537552-8n9x7\" (UID: \"420d340a-b4a1-4b60-9268-24770e57adb1\") " pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.338948 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflnv\" 
(UniqueName: \"kubernetes.io/projected/420d340a-b4a1-4b60-9268-24770e57adb1-kube-api-access-hflnv\") pod \"auto-csr-approver-29537552-8n9x7\" (UID: \"420d340a-b4a1-4b60-9268-24770e57adb1\") " pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.369136 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflnv\" (UniqueName: \"kubernetes.io/projected/420d340a-b4a1-4b60-9268-24770e57adb1-kube-api-access-hflnv\") pod \"auto-csr-approver-29537552-8n9x7\" (UID: \"420d340a-b4a1-4b60-9268-24770e57adb1\") " pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.384199 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" event={"ID":"fc3cb5fc-df50-44ef-a3a2-4c46a893266c","Type":"ContainerStarted","Data":"3db8626ce96ec0a1d41f14407cb40deb34af87e7f385716eeb7cb32819f334d9"} Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.384277 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" event={"ID":"fc3cb5fc-df50-44ef-a3a2-4c46a893266c","Type":"ContainerStarted","Data":"d03d1e10dcf8175147c740ac02416c6b1ea126fac13d8a17dd4f3ec5f973b290"} Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.401704 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" podStartSLOduration=1.401685769 podStartE2EDuration="1.401685769s" podCreationTimestamp="2026-02-28 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:32:00.398010199 +0000 UTC m=+3375.062049508" watchObservedRunningTime="2026-02-28 04:32:00.401685769 +0000 UTC m=+3375.065725078" Feb 28 04:32:00 crc kubenswrapper[4624]: I0228 04:32:00.592822 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:01 crc kubenswrapper[4624]: I0228 04:32:01.282547 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537552-8n9x7"] Feb 28 04:32:01 crc kubenswrapper[4624]: I0228 04:32:01.395601 4624 generic.go:334] "Generic (PLEG): container finished" podID="fc3cb5fc-df50-44ef-a3a2-4c46a893266c" containerID="3db8626ce96ec0a1d41f14407cb40deb34af87e7f385716eeb7cb32819f334d9" exitCode=0 Feb 28 04:32:01 crc kubenswrapper[4624]: I0228 04:32:01.395735 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" event={"ID":"fc3cb5fc-df50-44ef-a3a2-4c46a893266c","Type":"ContainerDied","Data":"3db8626ce96ec0a1d41f14407cb40deb34af87e7f385716eeb7cb32819f334d9"} Feb 28 04:32:01 crc kubenswrapper[4624]: I0228 04:32:01.398227 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" event={"ID":"420d340a-b4a1-4b60-9268-24770e57adb1","Type":"ContainerStarted","Data":"279314d59dafcc79aa3620b0807206111bc6f39e34cb27cb6405dc5a5f742b73"} Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.525913 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.559642 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-lnnvn"] Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.568995 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-lnnvn"] Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.589001 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-host\") pod \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.589152 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-host" (OuterVolumeSpecName: "host") pod "fc3cb5fc-df50-44ef-a3a2-4c46a893266c" (UID: "fc3cb5fc-df50-44ef-a3a2-4c46a893266c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.589366 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29xt9\" (UniqueName: \"kubernetes.io/projected/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-kube-api-access-29xt9\") pod \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\" (UID: \"fc3cb5fc-df50-44ef-a3a2-4c46a893266c\") " Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.590136 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-host\") on node \"crc\" DevicePath \"\"" Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.599013 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-kube-api-access-29xt9" (OuterVolumeSpecName: "kube-api-access-29xt9") pod "fc3cb5fc-df50-44ef-a3a2-4c46a893266c" (UID: "fc3cb5fc-df50-44ef-a3a2-4c46a893266c"). InnerVolumeSpecName "kube-api-access-29xt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:32:02 crc kubenswrapper[4624]: I0228 04:32:02.691155 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29xt9\" (UniqueName: \"kubernetes.io/projected/fc3cb5fc-df50-44ef-a3a2-4c46a893266c-kube-api-access-29xt9\") on node \"crc\" DevicePath \"\"" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.421350 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-lnnvn" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.424156 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d03d1e10dcf8175147c740ac02416c6b1ea126fac13d8a17dd4f3ec5f973b290" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.433012 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" event={"ID":"420d340a-b4a1-4b60-9268-24770e57adb1","Type":"ContainerStarted","Data":"3bc2b8d19dc04509aee4a70184dcc927359a21533ecee5ecb771b6846daaa737"} Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.757178 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-vd7kx"] Feb 28 04:32:03 crc kubenswrapper[4624]: E0228 04:32:03.757812 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3cb5fc-df50-44ef-a3a2-4c46a893266c" containerName="container-00" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.757825 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3cb5fc-df50-44ef-a3a2-4c46a893266c" containerName="container-00" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.758017 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3cb5fc-df50-44ef-a3a2-4c46a893266c" containerName="container-00" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.758591 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.760991 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v6hw7"/"default-dockercfg-hzwg2" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.818801 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/759ba53d-c134-447c-87b0-f0907a36e5dd-host\") pod \"crc-debug-vd7kx\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.819300 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87mj\" (UniqueName: \"kubernetes.io/projected/759ba53d-c134-447c-87b0-f0907a36e5dd-kube-api-access-w87mj\") pod \"crc-debug-vd7kx\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.923103 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w87mj\" (UniqueName: \"kubernetes.io/projected/759ba53d-c134-447c-87b0-f0907a36e5dd-kube-api-access-w87mj\") pod \"crc-debug-vd7kx\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.923191 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/759ba53d-c134-447c-87b0-f0907a36e5dd-host\") pod \"crc-debug-vd7kx\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.923326 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/759ba53d-c134-447c-87b0-f0907a36e5dd-host\") pod \"crc-debug-vd7kx\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:03 crc kubenswrapper[4624]: I0228 04:32:03.960177 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87mj\" (UniqueName: \"kubernetes.io/projected/759ba53d-c134-447c-87b0-f0907a36e5dd-kube-api-access-w87mj\") pod \"crc-debug-vd7kx\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.075558 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.100673 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3cb5fc-df50-44ef-a3a2-4c46a893266c" path="/var/lib/kubelet/pods/fc3cb5fc-df50-44ef-a3a2-4c46a893266c/volumes" Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.446504 4624 generic.go:334] "Generic (PLEG): container finished" podID="420d340a-b4a1-4b60-9268-24770e57adb1" containerID="3bc2b8d19dc04509aee4a70184dcc927359a21533ecee5ecb771b6846daaa737" exitCode=0 Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.446550 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" event={"ID":"420d340a-b4a1-4b60-9268-24770e57adb1","Type":"ContainerDied","Data":"3bc2b8d19dc04509aee4a70184dcc927359a21533ecee5ecb771b6846daaa737"} Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.448688 4624 generic.go:334] "Generic (PLEG): container finished" podID="759ba53d-c134-447c-87b0-f0907a36e5dd" containerID="a3f5e943aeca486382c5926fb077cbb07332ba7aac78cf94c5b559de22ae6b93" exitCode=0 Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.448721 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" event={"ID":"759ba53d-c134-447c-87b0-f0907a36e5dd","Type":"ContainerDied","Data":"a3f5e943aeca486382c5926fb077cbb07332ba7aac78cf94c5b559de22ae6b93"} Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.448742 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" event={"ID":"759ba53d-c134-447c-87b0-f0907a36e5dd","Type":"ContainerStarted","Data":"d884bb3f198a0d40609ee1681cb25ce2881a4a10c5715569f60f8a03c6814c92"} Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.504172 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-vd7kx"] Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.514202 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v6hw7/crc-debug-vd7kx"] Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.796900 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.942113 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hflnv\" (UniqueName: \"kubernetes.io/projected/420d340a-b4a1-4b60-9268-24770e57adb1-kube-api-access-hflnv\") pod \"420d340a-b4a1-4b60-9268-24770e57adb1\" (UID: \"420d340a-b4a1-4b60-9268-24770e57adb1\") " Feb 28 04:32:04 crc kubenswrapper[4624]: I0228 04:32:04.970017 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420d340a-b4a1-4b60-9268-24770e57adb1-kube-api-access-hflnv" (OuterVolumeSpecName: "kube-api-access-hflnv") pod "420d340a-b4a1-4b60-9268-24770e57adb1" (UID: "420d340a-b4a1-4b60-9268-24770e57adb1"). InnerVolumeSpecName "kube-api-access-hflnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.044653 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hflnv\" (UniqueName: \"kubernetes.io/projected/420d340a-b4a1-4b60-9268-24770e57adb1-kube-api-access-hflnv\") on node \"crc\" DevicePath \"\"" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.462211 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.462231 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537552-8n9x7" event={"ID":"420d340a-b4a1-4b60-9268-24770e57adb1","Type":"ContainerDied","Data":"279314d59dafcc79aa3620b0807206111bc6f39e34cb27cb6405dc5a5f742b73"} Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.462301 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279314d59dafcc79aa3620b0807206111bc6f39e34cb27cb6405dc5a5f742b73" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.593595 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.759751 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w87mj\" (UniqueName: \"kubernetes.io/projected/759ba53d-c134-447c-87b0-f0907a36e5dd-kube-api-access-w87mj\") pod \"759ba53d-c134-447c-87b0-f0907a36e5dd\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.760296 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/759ba53d-c134-447c-87b0-f0907a36e5dd-host\") pod \"759ba53d-c134-447c-87b0-f0907a36e5dd\" (UID: \"759ba53d-c134-447c-87b0-f0907a36e5dd\") " Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.761029 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/759ba53d-c134-447c-87b0-f0907a36e5dd-host" (OuterVolumeSpecName: "host") pod "759ba53d-c134-447c-87b0-f0907a36e5dd" (UID: "759ba53d-c134-447c-87b0-f0907a36e5dd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.782625 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759ba53d-c134-447c-87b0-f0907a36e5dd-kube-api-access-w87mj" (OuterVolumeSpecName: "kube-api-access-w87mj") pod "759ba53d-c134-447c-87b0-f0907a36e5dd" (UID: "759ba53d-c134-447c-87b0-f0907a36e5dd"). InnerVolumeSpecName "kube-api-access-w87mj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.863937 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/759ba53d-c134-447c-87b0-f0907a36e5dd-host\") on node \"crc\" DevicePath \"\"" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.864022 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w87mj\" (UniqueName: \"kubernetes.io/projected/759ba53d-c134-447c-87b0-f0907a36e5dd-kube-api-access-w87mj\") on node \"crc\" DevicePath \"\"" Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.883652 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537546-6bb9s"] Feb 28 04:32:05 crc kubenswrapper[4624]: I0228 04:32:05.892728 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537546-6bb9s"] Feb 28 04:32:06 crc kubenswrapper[4624]: I0228 04:32:06.103363 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759ba53d-c134-447c-87b0-f0907a36e5dd" path="/var/lib/kubelet/pods/759ba53d-c134-447c-87b0-f0907a36e5dd/volumes" Feb 28 04:32:06 crc kubenswrapper[4624]: I0228 04:32:06.103937 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c" path="/var/lib/kubelet/pods/e3308cd5-0ea1-40c6-b2ef-b5b1e379af2c/volumes" Feb 28 04:32:06 crc kubenswrapper[4624]: I0228 04:32:06.475180 4624 scope.go:117] "RemoveContainer" containerID="a3f5e943aeca486382c5926fb077cbb07332ba7aac78cf94c5b559de22ae6b93" Feb 28 04:32:06 crc kubenswrapper[4624]: I0228 04:32:06.475275 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/crc-debug-vd7kx" Feb 28 04:32:33 crc kubenswrapper[4624]: I0228 04:32:33.167766 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-697469cdb8-v44r2_413c221c-acb0-4f2d-9621-b5bd0cdc14a5/barbican-api/0.log" Feb 28 04:32:33 crc kubenswrapper[4624]: I0228 04:32:33.474102 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bc94fcbd6-4dndd_5fbb7219-e74f-4adf-bf31-31794a503f07/barbican-keystone-listener/0.log" Feb 28 04:32:33 crc kubenswrapper[4624]: I0228 04:32:33.660968 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bc94fcbd6-4dndd_5fbb7219-e74f-4adf-bf31-31794a503f07/barbican-keystone-listener-log/0.log" Feb 28 04:32:33 crc kubenswrapper[4624]: I0228 04:32:33.911434 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c4588546c-gkrmm_3189b6cc-a911-48f2-aff9-f41b3313d38a/barbican-worker/0.log" Feb 28 04:32:33 crc kubenswrapper[4624]: I0228 04:32:33.982226 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-697469cdb8-v44r2_413c221c-acb0-4f2d-9621-b5bd0cdc14a5/barbican-api-log/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.044856 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c4588546c-gkrmm_3189b6cc-a911-48f2-aff9-f41b3313d38a/barbican-worker-log/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.157209 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86_b81c936c-7c68-4155-bee6-b4fab7bc44e8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.248536 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/ceilometer-central-agent/0.log" 
Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.296058 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/ceilometer-notification-agent/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.378643 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/proxy-httpd/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.427361 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/sg-core/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.562229 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1343dbdf-afca-44d9-a8b3-828c71fe25a1/cinder-api/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.648586 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1343dbdf-afca-44d9-a8b3-828c71fe25a1/cinder-api-log/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.734104 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d77d2859-7ba3-4a5a-b2e2-536e824afade/cinder-scheduler/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.871058 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d77d2859-7ba3-4a5a-b2e2-536e824afade/probe/0.log" Feb 28 04:32:34 crc kubenswrapper[4624]: I0228 04:32:34.894579 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb_88fcba71-7eeb-4780-88f3-3d751230eb2a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.150453 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-gghhz_a2c9b638-8f30-49a3-a818-05bc76a99b30/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.188193 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-zth9s_4b13e83c-72f7-4925-abc6-1e284917cb66/init/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.426050 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jnldp_23fb1205-74ef-497d-bbd0-10fff39c6a4a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.447302 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-zth9s_4b13e83c-72f7-4925-abc6-1e284917cb66/init/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.468724 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-zth9s_4b13e83c-72f7-4925-abc6-1e284917cb66/dnsmasq-dns/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.653069 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_593dc11b-6b54-49d4-b9d9-c233b6ecd3ca/glance-httpd/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.671689 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_593dc11b-6b54-49d4-b9d9-c233b6ecd3ca/glance-log/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.844717 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a94cc3e1-53ad-429f-b778-ae8941ba8085/glance-httpd/0.log" Feb 28 04:32:35 crc kubenswrapper[4624]: I0228 04:32:35.921077 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_a94cc3e1-53ad-429f-b778-ae8941ba8085/glance-log/0.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.228531 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988c5cd-svksm_6ccc2a9a-c3cc-4ddb-a700-86713957337e/horizon/2.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.279583 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988c5cd-svksm_6ccc2a9a-c3cc-4ddb-a700-86713957337e/horizon/3.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.448564 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988c5cd-svksm_6ccc2a9a-c3cc-4ddb-a700-86713957337e/horizon-log/0.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.500989 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-67png_b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.565608 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c9bfg_9d24e266-6648-42ed-a44e-0b37c5e974a0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.852305 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537521-9tntm_95510d44-6b29-45b3-b0a0-4a6ad761fa4e/keystone-cron/0.log" Feb 28 04:32:36 crc kubenswrapper[4624]: I0228 04:32:36.898814 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f48865754-rdngs_3eeb3ef4-037f-4755-a2d3-46df6804b116/keystone-api/0.log" Feb 28 04:32:37 crc kubenswrapper[4624]: I0228 04:32:37.361747 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_60d209e2-524d-40ba-b092-14b4f73dfb71/kube-state-metrics/0.log" Feb 
28 04:32:37 crc kubenswrapper[4624]: I0228 04:32:37.452727 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-p42q4_9608e724-9bc7-4040-bfd3-29f159075de8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:37 crc kubenswrapper[4624]: I0228 04:32:37.719565 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4965c79c-gh5mv_c6aa9707-50ce-40f7-a741-9dcfea4b1f8e/neutron-httpd/0.log" Feb 28 04:32:37 crc kubenswrapper[4624]: I0228 04:32:37.829467 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4965c79c-gh5mv_c6aa9707-50ce-40f7-a741-9dcfea4b1f8e/neutron-api/0.log" Feb 28 04:32:37 crc kubenswrapper[4624]: I0228 04:32:37.992463 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89_bef97704-39c1-4a26-b58f-90b76510822c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:38 crc kubenswrapper[4624]: I0228 04:32:38.259745 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_78266014-e4d1-459b-b48f-a8b21a17cce3/nova-api-log/0.log" Feb 28 04:32:38 crc kubenswrapper[4624]: I0228 04:32:38.371540 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_78266014-e4d1-459b-b48f-a8b21a17cce3/nova-api-api/0.log" Feb 28 04:32:38 crc kubenswrapper[4624]: I0228 04:32:38.487313 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6d7f478f-8240-4fb8-8cfe-5b2e16c55b21/nova-cell0-conductor-conductor/0.log" Feb 28 04:32:38 crc kubenswrapper[4624]: I0228 04:32:38.718278 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cff7ee8c-629b-43aa-a39b-1b2282c58d2b/nova-cell1-conductor-conductor/0.log" Feb 28 04:32:38 crc kubenswrapper[4624]: I0228 04:32:38.886520 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4b31404c-f19e-465d-9acb-3a314299ad57/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 04:32:39 crc kubenswrapper[4624]: I0228 04:32:39.055708 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-fr4lw_15a08883-796d-49b7-a003-66cb6cc51189/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:39 crc kubenswrapper[4624]: I0228 04:32:39.385298 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_84e801c6-735b-4858-81d4-2dac7c9eba75/nova-metadata-log/0.log" Feb 28 04:32:39 crc kubenswrapper[4624]: I0228 04:32:39.504362 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6/nova-scheduler-scheduler/0.log" Feb 28 04:32:39 crc kubenswrapper[4624]: I0228 04:32:39.663281 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db8c8413-e456-4f82-9947-7d37578d237f/mysql-bootstrap/0.log" Feb 28 04:32:39 crc kubenswrapper[4624]: I0228 04:32:39.946424 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db8c8413-e456-4f82-9947-7d37578d237f/mysql-bootstrap/0.log" Feb 28 04:32:39 crc kubenswrapper[4624]: I0228 04:32:39.972518 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db8c8413-e456-4f82-9947-7d37578d237f/galera/0.log" Feb 28 04:32:40 crc kubenswrapper[4624]: I0228 04:32:40.241958 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c9c8d03c-80e2-42fc-a320-8175c10a59c4/mysql-bootstrap/0.log" Feb 28 04:32:40 crc kubenswrapper[4624]: I0228 04:32:40.415988 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_84e801c6-735b-4858-81d4-2dac7c9eba75/nova-metadata-metadata/0.log" Feb 28 04:32:40 crc kubenswrapper[4624]: I0228 04:32:40.471260 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c9c8d03c-80e2-42fc-a320-8175c10a59c4/mysql-bootstrap/0.log" Feb 28 04:32:40 crc kubenswrapper[4624]: I0228 04:32:40.533264 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c9c8d03c-80e2-42fc-a320-8175c10a59c4/galera/0.log" Feb 28 04:32:40 crc kubenswrapper[4624]: I0228 04:32:40.737990 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b9hfd_34bc3551-9974-4754-b285-e61f586a0b18/openstack-network-exporter/0.log" Feb 28 04:32:40 crc kubenswrapper[4624]: I0228 04:32:40.809458 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_fa3966ee-e42d-4dfe-a730-978481d7f497/openstackclient/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.051929 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovsdb-server-init/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.335720 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovsdb-server/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.378565 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovs-vswitchd/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.410747 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovsdb-server-init/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.684378 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-phft7_6da0269d-5fc3-487a-a49a-fa87c07af687/ovn-controller/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.766844 4624 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ks9tq_e0224a59-2832-42cb-91f3-e0f12db48a81/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:41 crc kubenswrapper[4624]: I0228 04:32:41.995835 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5e11975f-5910-43a1-91ed-2633d3576fce/openstack-network-exporter/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.049582 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5e11975f-5910-43a1-91ed-2633d3576fce/ovn-northd/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.183812 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1463f48e-4ada-4214-b4cf-520088ae4fe4/openstack-network-exporter/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.433951 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c/ovsdbserver-sb/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.456003 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c/openstack-network-exporter/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.500675 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1463f48e-4ada-4214-b4cf-520088ae4fe4/ovsdbserver-nb/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.777494 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86b4894974-wxqfg_0391f882-2f7a-47e9-b4f2-b640e146e079/placement-api/0.log" Feb 28 04:32:42 crc kubenswrapper[4624]: I0228 04:32:42.829378 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86b4894974-wxqfg_0391f882-2f7a-47e9-b4f2-b640e146e079/placement-log/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.060775 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_00b8a2b6-f64c-452c-ac93-00422b339f64/setup-container/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.313060 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_00b8a2b6-f64c-452c-ac93-00422b339f64/rabbitmq/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.440364 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_03d202d9-cd01-4f0c-b7dc-9e89a7676c65/setup-container/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.452439 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_00b8a2b6-f64c-452c-ac93-00422b339f64/setup-container/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.823229 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6_0bcd1ce2-be32-4778-aced-701605c2cc28/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.854961 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_03d202d9-cd01-4f0c-b7dc-9e89a7676c65/setup-container/0.log" Feb 28 04:32:43 crc kubenswrapper[4624]: I0228 04:32:43.862948 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_03d202d9-cd01-4f0c-b7dc-9e89a7676c65/rabbitmq/0.log" Feb 28 04:32:44 crc kubenswrapper[4624]: I0228 04:32:44.228964 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p_435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:44 crc kubenswrapper[4624]: I0228 04:32:44.239879 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tb8rg_985adc96-94ed-4823-a477-f222def355a1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:44 crc kubenswrapper[4624]: I0228 04:32:44.585410 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nqpfh_7c73aa0b-4045-4181-849d-8e7a631cdb87/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:44 crc kubenswrapper[4624]: I0228 04:32:44.758175 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7qc65_077546b4-fddd-40c3-866a-714afa3a4f2f/ssh-known-hosts-edpm-deployment/0.log" Feb 28 04:32:44 crc kubenswrapper[4624]: I0228 04:32:44.942830 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-775c6bbdc-lvbk6_7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41/proxy-server/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.180111 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-775c6bbdc-lvbk6_7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41/proxy-httpd/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.332804 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gfd7z_41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33/swift-ring-rebalance/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.506362 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-auditor/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.581929 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-reaper/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.604649 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-server/0.log" 
Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.683706 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-replicator/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.799029 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-auditor/0.log" Feb 28 04:32:45 crc kubenswrapper[4624]: I0228 04:32:45.935405 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-replicator/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.011060 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-server/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.014670 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-updater/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.122843 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-auditor/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.233517 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-expirer/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.293870 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-replicator/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.355270 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-server/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.485784 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-updater/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.555919 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/rsync/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.621141 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/swift-recon-cron/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.889312 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq_2588d2da-daa4-4eb7-b706-25290e0840c7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:46 crc kubenswrapper[4624]: I0228 04:32:46.990309 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8687164b-ff55-49e1-ae97-79d38c05f861/tempest-tests-tempest-tests-runner/0.log" Feb 28 04:32:47 crc kubenswrapper[4624]: I0228 04:32:47.182685 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b5bf464d-f307-4f8b-be8c-cfe363cc6daa/test-operator-logs-container/0.log" Feb 28 04:32:47 crc kubenswrapper[4624]: I0228 04:32:47.320572 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-92tp2_b801953f-c310-4623-ad3e-69dc84bc9a34/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:32:49 crc kubenswrapper[4624]: I0228 04:32:49.539299 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 
04:32:49 crc kubenswrapper[4624]: I0228 04:32:49.539566 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.198910 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q25ld"] Feb 28 04:32:50 crc kubenswrapper[4624]: E0228 04:32:50.199346 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420d340a-b4a1-4b60-9268-24770e57adb1" containerName="oc" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.199364 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="420d340a-b4a1-4b60-9268-24770e57adb1" containerName="oc" Feb 28 04:32:50 crc kubenswrapper[4624]: E0228 04:32:50.199398 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759ba53d-c134-447c-87b0-f0907a36e5dd" containerName="container-00" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.199405 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="759ba53d-c134-447c-87b0-f0907a36e5dd" containerName="container-00" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.199573 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="759ba53d-c134-447c-87b0-f0907a36e5dd" containerName="container-00" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.199599 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="420d340a-b4a1-4b60-9268-24770e57adb1" containerName="oc" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.201288 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.228116 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q25ld"] Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.294195 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-utilities\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.294276 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-catalog-content\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.294334 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88lr\" (UniqueName: \"kubernetes.io/projected/43e38251-7447-446f-950e-c3ea42d758c1-kube-api-access-g88lr\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.395606 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-utilities\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.395681 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-catalog-content\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.395732 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88lr\" (UniqueName: \"kubernetes.io/projected/43e38251-7447-446f-950e-c3ea42d758c1-kube-api-access-g88lr\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.396437 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-utilities\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.396504 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-catalog-content\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.415964 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88lr\" (UniqueName: \"kubernetes.io/projected/43e38251-7447-446f-950e-c3ea42d758c1-kube-api-access-g88lr\") pod \"redhat-operators-q25ld\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:50 crc kubenswrapper[4624]: I0228 04:32:50.522390 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:32:51 crc kubenswrapper[4624]: I0228 04:32:51.296871 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q25ld"] Feb 28 04:32:52 crc kubenswrapper[4624]: I0228 04:32:52.277233 4624 generic.go:334] "Generic (PLEG): container finished" podID="43e38251-7447-446f-950e-c3ea42d758c1" containerID="8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5" exitCode=0 Feb 28 04:32:52 crc kubenswrapper[4624]: I0228 04:32:52.277540 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerDied","Data":"8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5"} Feb 28 04:32:52 crc kubenswrapper[4624]: I0228 04:32:52.277576 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerStarted","Data":"bbbe37bc377abb10e6ef3007754e28569749736ee78862f815d194da4405766b"} Feb 28 04:32:54 crc kubenswrapper[4624]: I0228 04:32:54.307925 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerStarted","Data":"182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e"} Feb 28 04:32:57 crc kubenswrapper[4624]: I0228 04:32:57.293651 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_81d248fe-a92f-469e-8283-3fd135198c65/memcached/0.log" Feb 28 04:32:59 crc kubenswrapper[4624]: I0228 04:32:59.355002 4624 generic.go:334] "Generic (PLEG): container finished" podID="43e38251-7447-446f-950e-c3ea42d758c1" containerID="182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e" exitCode=0 Feb 28 04:32:59 crc kubenswrapper[4624]: I0228 04:32:59.355102 4624 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerDied","Data":"182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e"} Feb 28 04:33:00 crc kubenswrapper[4624]: I0228 04:33:00.370100 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerStarted","Data":"e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286"} Feb 28 04:33:00 crc kubenswrapper[4624]: I0228 04:33:00.402021 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q25ld" podStartSLOduration=2.9301651140000002 podStartE2EDuration="10.402001146s" podCreationTimestamp="2026-02-28 04:32:50 +0000 UTC" firstStartedPulling="2026-02-28 04:32:52.279766681 +0000 UTC m=+3426.943805990" lastFinishedPulling="2026-02-28 04:32:59.751602713 +0000 UTC m=+3434.415642022" observedRunningTime="2026-02-28 04:33:00.393240489 +0000 UTC m=+3435.057279798" watchObservedRunningTime="2026-02-28 04:33:00.402001146 +0000 UTC m=+3435.066040455" Feb 28 04:33:00 crc kubenswrapper[4624]: I0228 04:33:00.524136 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:33:00 crc kubenswrapper[4624]: I0228 04:33:00.524203 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:33:01 crc kubenswrapper[4624]: I0228 04:33:01.589662 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q25ld" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="registry-server" probeResult="failure" output=< Feb 28 04:33:01 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:33:01 crc kubenswrapper[4624]: > 
Feb 28 04:33:03 crc kubenswrapper[4624]: I0228 04:33:03.350960 4624 scope.go:117] "RemoveContainer" containerID="cafedfd73f71ed29499c1e42da6c1c544da86524d47f06f2ced614f314fffcad" Feb 28 04:33:11 crc kubenswrapper[4624]: I0228 04:33:11.578008 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q25ld" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="registry-server" probeResult="failure" output=< Feb 28 04:33:11 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:33:11 crc kubenswrapper[4624]: > Feb 28 04:33:12 crc kubenswrapper[4624]: I0228 04:33:12.842825 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrzwz"] Feb 28 04:33:12 crc kubenswrapper[4624]: I0228 04:33:12.844921 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:12 crc kubenswrapper[4624]: I0228 04:33:12.868869 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrzwz"] Feb 28 04:33:12 crc kubenswrapper[4624]: I0228 04:33:12.931845 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-catalog-content\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:12 crc kubenswrapper[4624]: I0228 04:33:12.932020 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-utilities\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:12 crc kubenswrapper[4624]: I0228 
04:33:12.932065 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vg7k\" (UniqueName: \"kubernetes.io/projected/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-kube-api-access-4vg7k\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.033681 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-utilities\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.033737 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vg7k\" (UniqueName: \"kubernetes.io/projected/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-kube-api-access-4vg7k\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.033863 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-catalog-content\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.034720 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-utilities\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 
04:33:13.034737 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-catalog-content\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.055151 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vg7k\" (UniqueName: \"kubernetes.io/projected/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-kube-api-access-4vg7k\") pod \"community-operators-qrzwz\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.164657 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:13 crc kubenswrapper[4624]: I0228 04:33:13.732277 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrzwz"] Feb 28 04:33:14 crc kubenswrapper[4624]: I0228 04:33:14.543133 4624 generic.go:334] "Generic (PLEG): container finished" podID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerID="4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481" exitCode=0 Feb 28 04:33:14 crc kubenswrapper[4624]: I0228 04:33:14.544587 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerDied","Data":"4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481"} Feb 28 04:33:14 crc kubenswrapper[4624]: I0228 04:33:14.544671 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" 
event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerStarted","Data":"9ae3b439372ff2d8ee67a0c868bdd130a736bcbcc787f8fbbe94e01d900c15f8"} Feb 28 04:33:14 crc kubenswrapper[4624]: I0228 04:33:14.547189 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:33:15 crc kubenswrapper[4624]: I0228 04:33:15.556749 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerStarted","Data":"9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482"} Feb 28 04:33:17 crc kubenswrapper[4624]: I0228 04:33:17.576377 4624 generic.go:334] "Generic (PLEG): container finished" podID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerID="9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482" exitCode=0 Feb 28 04:33:17 crc kubenswrapper[4624]: I0228 04:33:17.576454 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerDied","Data":"9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482"} Feb 28 04:33:18 crc kubenswrapper[4624]: I0228 04:33:18.591682 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerStarted","Data":"6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e"} Feb 28 04:33:18 crc kubenswrapper[4624]: I0228 04:33:18.624262 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrzwz" podStartSLOduration=3.151750265 podStartE2EDuration="6.624231511s" podCreationTimestamp="2026-02-28 04:33:12 +0000 UTC" firstStartedPulling="2026-02-28 04:33:14.546976659 +0000 UTC m=+3449.211015968" lastFinishedPulling="2026-02-28 04:33:18.019457885 +0000 UTC 
m=+3452.683497214" observedRunningTime="2026-02-28 04:33:18.612684189 +0000 UTC m=+3453.276723498" watchObservedRunningTime="2026-02-28 04:33:18.624231511 +0000 UTC m=+3453.288270820" Feb 28 04:33:19 crc kubenswrapper[4624]: I0228 04:33:19.539822 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:33:19 crc kubenswrapper[4624]: I0228 04:33:19.539907 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:33:20 crc kubenswrapper[4624]: I0228 04:33:20.610030 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:33:20 crc kubenswrapper[4624]: I0228 04:33:20.666148 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:33:21 crc kubenswrapper[4624]: I0228 04:33:21.219444 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q25ld"] Feb 28 04:33:22 crc kubenswrapper[4624]: I0228 04:33:22.654938 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q25ld" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="registry-server" containerID="cri-o://e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286" gracePeriod=2 Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.165010 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.165934 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.206456 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.215793 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.290487 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g88lr\" (UniqueName: \"kubernetes.io/projected/43e38251-7447-446f-950e-c3ea42d758c1-kube-api-access-g88lr\") pod \"43e38251-7447-446f-950e-c3ea42d758c1\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.290803 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-catalog-content\") pod \"43e38251-7447-446f-950e-c3ea42d758c1\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.290901 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-utilities\") pod \"43e38251-7447-446f-950e-c3ea42d758c1\" (UID: \"43e38251-7447-446f-950e-c3ea42d758c1\") " Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.291804 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-utilities" (OuterVolumeSpecName: "utilities") pod "43e38251-7447-446f-950e-c3ea42d758c1" (UID: 
"43e38251-7447-446f-950e-c3ea42d758c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.322712 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e38251-7447-446f-950e-c3ea42d758c1-kube-api-access-g88lr" (OuterVolumeSpecName: "kube-api-access-g88lr") pod "43e38251-7447-446f-950e-c3ea42d758c1" (UID: "43e38251-7447-446f-950e-c3ea42d758c1"). InnerVolumeSpecName "kube-api-access-g88lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.395201 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.395262 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g88lr\" (UniqueName: \"kubernetes.io/projected/43e38251-7447-446f-950e-c3ea42d758c1-kube-api-access-g88lr\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.458004 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43e38251-7447-446f-950e-c3ea42d758c1" (UID: "43e38251-7447-446f-950e-c3ea42d758c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.497727 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43e38251-7447-446f-950e-c3ea42d758c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.619850 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rtwz7"] Feb 28 04:33:23 crc kubenswrapper[4624]: E0228 04:33:23.620208 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="extract-content" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.620226 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="extract-content" Feb 28 04:33:23 crc kubenswrapper[4624]: E0228 04:33:23.620246 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="registry-server" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.620253 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="registry-server" Feb 28 04:33:23 crc kubenswrapper[4624]: E0228 04:33:23.620268 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="extract-utilities" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.620275 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="extract-utilities" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.620463 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e38251-7447-446f-950e-c3ea42d758c1" containerName="registry-server" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.621704 4624 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.638113 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtwz7"] Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.670040 4624 generic.go:334] "Generic (PLEG): container finished" podID="43e38251-7447-446f-950e-c3ea42d758c1" containerID="e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286" exitCode=0 Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.671068 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q25ld" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.672207 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerDied","Data":"e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286"} Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.672262 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q25ld" event={"ID":"43e38251-7447-446f-950e-c3ea42d758c1","Type":"ContainerDied","Data":"bbbe37bc377abb10e6ef3007754e28569749736ee78862f815d194da4405766b"} Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.672293 4624 scope.go:117] "RemoveContainer" containerID="e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.706163 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-catalog-content\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 
04:33:23.706220 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-utilities\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.706305 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57h2n\" (UniqueName: \"kubernetes.io/projected/443ec606-4a15-42fc-98a1-fad22d33053b-kube-api-access-57h2n\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.712625 4624 scope.go:117] "RemoveContainer" containerID="182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.712872 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q25ld"] Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.729890 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q25ld"] Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.742230 4624 scope.go:117] "RemoveContainer" containerID="8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.755652 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.809274 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57h2n\" (UniqueName: \"kubernetes.io/projected/443ec606-4a15-42fc-98a1-fad22d33053b-kube-api-access-57h2n\") pod \"certified-operators-rtwz7\" (UID: 
\"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.809429 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-catalog-content\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.809451 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-utilities\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.809897 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-utilities\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.810774 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-catalog-content\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.822625 4624 scope.go:117] "RemoveContainer" containerID="e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286" Feb 28 04:33:23 crc kubenswrapper[4624]: E0228 04:33:23.825582 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286\": container with ID starting with e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286 not found: ID does not exist" containerID="e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.825633 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286"} err="failed to get container status \"e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286\": rpc error: code = NotFound desc = could not find container \"e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286\": container with ID starting with e0b4e21c23c33af1b6b76b3533e033b261473db9dc9877cf35fd8b2031e59286 not found: ID does not exist" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.825664 4624 scope.go:117] "RemoveContainer" containerID="182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.827899 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57h2n\" (UniqueName: \"kubernetes.io/projected/443ec606-4a15-42fc-98a1-fad22d33053b-kube-api-access-57h2n\") pod \"certified-operators-rtwz7\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:23 crc kubenswrapper[4624]: E0228 04:33:23.832238 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e\": container with ID starting with 182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e not found: ID does not exist" containerID="182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.832285 4624 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e"} err="failed to get container status \"182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e\": rpc error: code = NotFound desc = could not find container \"182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e\": container with ID starting with 182a54e4d68189e1c4163b09a4d187e9fd122f8665e7e6fcd2e32c6d1452572e not found: ID does not exist" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.832311 4624 scope.go:117] "RemoveContainer" containerID="8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5" Feb 28 04:33:23 crc kubenswrapper[4624]: E0228 04:33:23.832790 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5\": container with ID starting with 8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5 not found: ID does not exist" containerID="8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.832808 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5"} err="failed to get container status \"8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5\": rpc error: code = NotFound desc = could not find container \"8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5\": container with ID starting with 8c02a35b52388c0e975b3f011a26f7eb25ee3622af217563f05b45dc149d50d5 not found: ID does not exist" Feb 28 04:33:23 crc kubenswrapper[4624]: I0228 04:33:23.941216 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:24 crc kubenswrapper[4624]: I0228 04:33:24.178446 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e38251-7447-446f-950e-c3ea42d758c1" path="/var/lib/kubelet/pods/43e38251-7447-446f-950e-c3ea42d758c1/volumes" Feb 28 04:33:24 crc kubenswrapper[4624]: I0228 04:33:24.595348 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rtwz7"] Feb 28 04:33:24 crc kubenswrapper[4624]: I0228 04:33:24.711489 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerStarted","Data":"1aa6d0dc37724eb7fdc7272fafe474b0cddc699b6d17e280f49b40921a3e81b5"} Feb 28 04:33:25 crc kubenswrapper[4624]: I0228 04:33:25.725368 4624 generic.go:334] "Generic (PLEG): container finished" podID="443ec606-4a15-42fc-98a1-fad22d33053b" containerID="ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed" exitCode=0 Feb 28 04:33:25 crc kubenswrapper[4624]: I0228 04:33:25.725446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerDied","Data":"ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed"} Feb 28 04:33:25 crc kubenswrapper[4624]: I0228 04:33:25.813239 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrzwz"] Feb 28 04:33:25 crc kubenswrapper[4624]: I0228 04:33:25.813486 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrzwz" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="registry-server" containerID="cri-o://6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e" gracePeriod=2 Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 
04:33:26.365876 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.483388 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-utilities\") pod \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.483843 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-catalog-content\") pod \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.484036 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vg7k\" (UniqueName: \"kubernetes.io/projected/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-kube-api-access-4vg7k\") pod \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\" (UID: \"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f\") " Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.485036 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-utilities" (OuterVolumeSpecName: "utilities") pod "c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" (UID: "c8a2c453-3c69-46d8-a6c8-954cb03e3b9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.507652 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-kube-api-access-4vg7k" (OuterVolumeSpecName: "kube-api-access-4vg7k") pod "c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" (UID: "c8a2c453-3c69-46d8-a6c8-954cb03e3b9f"). InnerVolumeSpecName "kube-api-access-4vg7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.530291 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/util/0.log" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.563133 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" (UID: "c8a2c453-3c69-46d8-a6c8-954cb03e3b9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.586413 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.586444 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.586454 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vg7k\" (UniqueName: \"kubernetes.io/projected/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f-kube-api-access-4vg7k\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.737718 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerStarted","Data":"4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef"} Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.748953 4624 generic.go:334] "Generic (PLEG): container finished" podID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerID="6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e" exitCode=0 Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.749018 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerDied","Data":"6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e"} Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.749062 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrzwz" 
event={"ID":"c8a2c453-3c69-46d8-a6c8-954cb03e3b9f","Type":"ContainerDied","Data":"9ae3b439372ff2d8ee67a0c868bdd130a736bcbcc787f8fbbe94e01d900c15f8"} Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.749119 4624 scope.go:117] "RemoveContainer" containerID="6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.749314 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrzwz" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.812100 4624 scope.go:117] "RemoveContainer" containerID="9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.815671 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrzwz"] Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.828550 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrzwz"] Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.842215 4624 scope.go:117] "RemoveContainer" containerID="4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.868664 4624 scope.go:117] "RemoveContainer" containerID="6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e" Feb 28 04:33:26 crc kubenswrapper[4624]: E0228 04:33:26.869116 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e\": container with ID starting with 6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e not found: ID does not exist" containerID="6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.869152 4624 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e"} err="failed to get container status \"6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e\": rpc error: code = NotFound desc = could not find container \"6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e\": container with ID starting with 6322bfea6bc0bf207a46cff43c2e7084072ef23a2f2b0b71d057a23b4141112e not found: ID does not exist" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.869182 4624 scope.go:117] "RemoveContainer" containerID="9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482" Feb 28 04:33:26 crc kubenswrapper[4624]: E0228 04:33:26.870668 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482\": container with ID starting with 9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482 not found: ID does not exist" containerID="9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.870724 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482"} err="failed to get container status \"9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482\": rpc error: code = NotFound desc = could not find container \"9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482\": container with ID starting with 9f8cb28197b1d22fedecf203d1b4177c40d355b7a2bb8fb656ab06e7ffdf7482 not found: ID does not exist" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.870762 4624 scope.go:117] "RemoveContainer" containerID="4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481" Feb 28 04:33:26 crc kubenswrapper[4624]: E0228 04:33:26.871030 4624 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481\": container with ID starting with 4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481 not found: ID does not exist" containerID="4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.871059 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481"} err="failed to get container status \"4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481\": rpc error: code = NotFound desc = could not find container \"4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481\": container with ID starting with 4dc274dd091c437d9556eac66f6e3ec44cd813415fcccdfdf193f0f6faebe481 not found: ID does not exist" Feb 28 04:33:26 crc kubenswrapper[4624]: I0228 04:33:26.998722 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/util/0.log" Feb 28 04:33:27 crc kubenswrapper[4624]: I0228 04:33:27.005939 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/pull/0.log" Feb 28 04:33:27 crc kubenswrapper[4624]: I0228 04:33:27.053459 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/pull/0.log" Feb 28 04:33:27 crc kubenswrapper[4624]: I0228 04:33:27.407307 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/extract/0.log" Feb 28 
04:33:27 crc kubenswrapper[4624]: I0228 04:33:27.407677 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/pull/0.log" Feb 28 04:33:27 crc kubenswrapper[4624]: I0228 04:33:27.413999 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/util/0.log" Feb 28 04:33:27 crc kubenswrapper[4624]: I0228 04:33:27.991831 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-5rdgn_f62e258f-732a-4da1-8670-475725509310/manager/0.log" Feb 28 04:33:28 crc kubenswrapper[4624]: I0228 04:33:28.105827 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" path="/var/lib/kubelet/pods/c8a2c453-3c69-46d8-a6c8-954cb03e3b9f/volumes" Feb 28 04:33:28 crc kubenswrapper[4624]: I0228 04:33:28.424033 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7f748f8b74-vdhdb_a5b6c7a0-a640-4faa-836c-7c5d0c29acd9/manager/0.log" Feb 28 04:33:28 crc kubenswrapper[4624]: I0228 04:33:28.798310 4624 generic.go:334] "Generic (PLEG): container finished" podID="443ec606-4a15-42fc-98a1-fad22d33053b" containerID="4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef" exitCode=0 Feb 28 04:33:28 crc kubenswrapper[4624]: I0228 04:33:28.798381 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerDied","Data":"4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef"} Feb 28 04:33:28 crc kubenswrapper[4624]: I0228 04:33:28.843224 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-585b788787-b97cm_8507d808-bdf4-47f7-adb9-e3746c4768bf/manager/0.log" Feb 28 04:33:29 crc kubenswrapper[4624]: I0228 04:33:29.283305 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7db95d7ffb-k68gx_5013f14b-e7ba-400b-8a1e-d187991a0e49/manager/0.log" Feb 28 04:33:29 crc kubenswrapper[4624]: I0228 04:33:29.417208 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-768c8b45bb-gkxmg_50df7aca-97ff-41dc-92cc-143cb02acea8/manager/0.log" Feb 28 04:33:29 crc kubenswrapper[4624]: I0228 04:33:29.833690 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerStarted","Data":"5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc"} Feb 28 04:33:29 crc kubenswrapper[4624]: I0228 04:33:29.856597 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rtwz7" podStartSLOduration=3.367444534 podStartE2EDuration="6.856573116s" podCreationTimestamp="2026-02-28 04:33:23 +0000 UTC" firstStartedPulling="2026-02-28 04:33:25.727673541 +0000 UTC m=+3460.391712850" lastFinishedPulling="2026-02-28 04:33:29.216802123 +0000 UTC m=+3463.880841432" observedRunningTime="2026-02-28 04:33:29.85523709 +0000 UTC m=+3464.519276399" watchObservedRunningTime="2026-02-28 04:33:29.856573116 +0000 UTC m=+3464.520612425" Feb 28 04:33:29 crc kubenswrapper[4624]: I0228 04:33:29.981739 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-8784b4656-8vq2d_2ef92f5a-9f82-40fb-81a2-c4a75aec60cf/manager/0.log" Feb 28 04:33:30 crc kubenswrapper[4624]: I0228 04:33:30.253065 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-c77466965-f8x9g_a96b8e7a-1320-4ede-9f43-ec80e2d562c9/manager/0.log" Feb 28 04:33:30 crc kubenswrapper[4624]: I0228 04:33:30.577074 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78b64779b9-rvz6s_84190c06-4523-4d3d-ab8c-cec0aca7c393/manager/0.log" Feb 28 04:33:30 crc kubenswrapper[4624]: I0228 04:33:30.640765 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76fd76856-knmpc_3e549f4d-18e0-49cf-a82e-efde664ab810/manager/0.log" Feb 28 04:33:31 crc kubenswrapper[4624]: I0228 04:33:31.073376 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-745fc45789-tvr7t_f3c08c1c-5646-48e9-9c9a-537b7619ecb0/manager/0.log" Feb 28 04:33:31 crc kubenswrapper[4624]: I0228 04:33:31.358595 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-768f998cf4-dv9vf_6497616d-eb08-4bd4-b3a0-8ee000cdfe47/manager/0.log" Feb 28 04:33:31 crc kubenswrapper[4624]: I0228 04:33:31.518459 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c67ff7674-psffj_797119fd-2208-40d7-86c8-594e59529182/manager/0.log" Feb 28 04:33:31 crc kubenswrapper[4624]: I0228 04:33:31.655055 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-cc79fdffd-xw2s7_8cb779fb-ff77-468c-9198-065b3e4bf393/manager/0.log" Feb 28 04:33:31 crc kubenswrapper[4624]: I0228 04:33:31.882756 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67_9776c87a-53fb-404c-8bbe-0fbeb07eda0d/manager/0.log" Feb 28 04:33:32 crc kubenswrapper[4624]: I0228 04:33:32.181858 4624 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-596b9db54c-pdc78_58d2ada8-fb04-4054-bba9-e2742bddbce5/operator/0.log" Feb 28 04:33:32 crc kubenswrapper[4624]: I0228 04:33:32.506612 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6r9np_5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be/registry-server/0.log" Feb 28 04:33:32 crc kubenswrapper[4624]: I0228 04:33:32.568414 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-999d845f-jrsj4_4833820c-a44e-4eb4-8716-bab85def7811/manager/0.log" Feb 28 04:33:32 crc kubenswrapper[4624]: I0228 04:33:32.717857 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-684c7d77b-c6gww_c6a151b1-0add-4b07-aa32-9a9e0dc2f526/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.199654 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-bff955cc4-x8vll_582d0963-7f3a-4664-85e4-9148c495eb1a/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.224128 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7d47r_154dfd82-a449-4812-bdd5-3e9c8a474b3d/operator/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.624415 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c5dbcf94c-psgpc_61521bf4-1381-4fe8-a9d3-0948ebaa1ca6/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.652862 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-tgr2z_d6722506-d5dd-4fb4-b81a-d27c5dab59dd/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.690639 4624 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-55f4bf89cb-54l7x_b26b01f4-0d96-4a5b-bb71-58d691b92119/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.878673 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-65c9f4f6b-7w84p_195b486b-92db-481a-9478-7a3edfeb79ae/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.898662 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-lfkwt_39ee7326-c4c7-4dee-a749-35da4ff62746/manager/0.log" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.942489 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.942560 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:33 crc kubenswrapper[4624]: I0228 04:33:33.995687 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:34 crc kubenswrapper[4624]: I0228 04:33:34.959831 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:35 crc kubenswrapper[4624]: I0228 04:33:35.018329 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtwz7"] Feb 28 04:33:36 crc kubenswrapper[4624]: I0228 04:33:36.903980 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rtwz7" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="registry-server" containerID="cri-o://5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc" gracePeriod=2 Feb 28 04:33:37 crc 
kubenswrapper[4624]: I0228 04:33:37.467705 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.526360 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57h2n\" (UniqueName: \"kubernetes.io/projected/443ec606-4a15-42fc-98a1-fad22d33053b-kube-api-access-57h2n\") pod \"443ec606-4a15-42fc-98a1-fad22d33053b\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.526459 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-catalog-content\") pod \"443ec606-4a15-42fc-98a1-fad22d33053b\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.526608 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-utilities\") pod \"443ec606-4a15-42fc-98a1-fad22d33053b\" (UID: \"443ec606-4a15-42fc-98a1-fad22d33053b\") " Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.528401 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-utilities" (OuterVolumeSpecName: "utilities") pod "443ec606-4a15-42fc-98a1-fad22d33053b" (UID: "443ec606-4a15-42fc-98a1-fad22d33053b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.545139 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443ec606-4a15-42fc-98a1-fad22d33053b-kube-api-access-57h2n" (OuterVolumeSpecName: "kube-api-access-57h2n") pod "443ec606-4a15-42fc-98a1-fad22d33053b" (UID: "443ec606-4a15-42fc-98a1-fad22d33053b"). InnerVolumeSpecName "kube-api-access-57h2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.603728 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "443ec606-4a15-42fc-98a1-fad22d33053b" (UID: "443ec606-4a15-42fc-98a1-fad22d33053b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.629331 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57h2n\" (UniqueName: \"kubernetes.io/projected/443ec606-4a15-42fc-98a1-fad22d33053b-kube-api-access-57h2n\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.629375 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.629387 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/443ec606-4a15-42fc-98a1-fad22d33053b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.917711 4624 generic.go:334] "Generic (PLEG): container finished" podID="443ec606-4a15-42fc-98a1-fad22d33053b" 
containerID="5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc" exitCode=0 Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.917761 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerDied","Data":"5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc"} Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.917786 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rtwz7" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.917801 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rtwz7" event={"ID":"443ec606-4a15-42fc-98a1-fad22d33053b","Type":"ContainerDied","Data":"1aa6d0dc37724eb7fdc7272fafe474b0cddc699b6d17e280f49b40921a3e81b5"} Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.917832 4624 scope.go:117] "RemoveContainer" containerID="5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.943644 4624 scope.go:117] "RemoveContainer" containerID="4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.969015 4624 scope.go:117] "RemoveContainer" containerID="ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed" Feb 28 04:33:37 crc kubenswrapper[4624]: I0228 04:33:37.994013 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rtwz7"] Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.005183 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rtwz7"] Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.017029 4624 scope.go:117] "RemoveContainer" containerID="5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc" Feb 28 
04:33:38 crc kubenswrapper[4624]: E0228 04:33:38.023952 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc\": container with ID starting with 5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc not found: ID does not exist" containerID="5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc" Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.024005 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc"} err="failed to get container status \"5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc\": rpc error: code = NotFound desc = could not find container \"5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc\": container with ID starting with 5f455ef2677314cfd56109697a8fba62ead1c4687e9cbbd8f2b9013a9f2178bc not found: ID does not exist" Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.024033 4624 scope.go:117] "RemoveContainer" containerID="4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef" Feb 28 04:33:38 crc kubenswrapper[4624]: E0228 04:33:38.024542 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef\": container with ID starting with 4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef not found: ID does not exist" containerID="4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef" Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.024601 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef"} err="failed to get container status 
\"4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef\": rpc error: code = NotFound desc = could not find container \"4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef\": container with ID starting with 4be06b81b41d3d0befcd2ec167721b6e34edd7f99cf307fd6632c509a316c7ef not found: ID does not exist" Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.024634 4624 scope.go:117] "RemoveContainer" containerID="ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed" Feb 28 04:33:38 crc kubenswrapper[4624]: E0228 04:33:38.024979 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed\": container with ID starting with ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed not found: ID does not exist" containerID="ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed" Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.025001 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed"} err="failed to get container status \"ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed\": rpc error: code = NotFound desc = could not find container \"ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed\": container with ID starting with ec1caee8ac5663cbfc2389203fd527f23b2ef8abae8e83c3c0143d3c0ef17bed not found: ID does not exist" Feb 28 04:33:38 crc kubenswrapper[4624]: I0228 04:33:38.103041 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" path="/var/lib/kubelet/pods/443ec606-4a15-42fc-98a1-fad22d33053b/volumes" Feb 28 04:33:49 crc kubenswrapper[4624]: I0228 04:33:49.540433 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:33:49 crc kubenswrapper[4624]: I0228 04:33:49.541460 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:33:49 crc kubenswrapper[4624]: I0228 04:33:49.541554 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:33:49 crc kubenswrapper[4624]: I0228 04:33:49.542939 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06e581f0888923edc4bc489a70ad03e776ef8104d0b2056afc183dca47bf121f"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:33:49 crc kubenswrapper[4624]: I0228 04:33:49.543039 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://06e581f0888923edc4bc489a70ad03e776ef8104d0b2056afc183dca47bf121f" gracePeriod=600 Feb 28 04:33:50 crc kubenswrapper[4624]: I0228 04:33:50.077914 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="06e581f0888923edc4bc489a70ad03e776ef8104d0b2056afc183dca47bf121f" exitCode=0 Feb 28 04:33:50 crc kubenswrapper[4624]: I0228 04:33:50.078130 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"06e581f0888923edc4bc489a70ad03e776ef8104d0b2056afc183dca47bf121f"} Feb 28 04:33:50 crc kubenswrapper[4624]: I0228 04:33:50.078828 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a"} Feb 28 04:33:50 crc kubenswrapper[4624]: I0228 04:33:50.078861 4624 scope.go:117] "RemoveContainer" containerID="4800282847b017d00a6b766e4c89348c3862a7222e6b68181524342f636f80d4" Feb 28 04:33:58 crc kubenswrapper[4624]: I0228 04:33:58.286303 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-m4qb9_d3ac81ca-3efe-4112-a8d0-9503bd1826b7/control-plane-machine-set-operator/0.log" Feb 28 04:33:58 crc kubenswrapper[4624]: I0228 04:33:58.504324 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nhzzm_b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d/machine-api-operator/0.log" Feb 28 04:33:58 crc kubenswrapper[4624]: I0228 04:33:58.536423 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nhzzm_b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d/kube-rbac-proxy/0.log" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.153971 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537554-nxxxz"] Feb 28 04:34:00 crc kubenswrapper[4624]: E0228 04:34:00.155511 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="extract-content" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155530 4624 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="extract-content" Feb 28 04:34:00 crc kubenswrapper[4624]: E0228 04:34:00.155539 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="extract-utilities" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155547 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="extract-utilities" Feb 28 04:34:00 crc kubenswrapper[4624]: E0228 04:34:00.155569 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="registry-server" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155578 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="registry-server" Feb 28 04:34:00 crc kubenswrapper[4624]: E0228 04:34:00.155596 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="extract-utilities" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155602 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="extract-utilities" Feb 28 04:34:00 crc kubenswrapper[4624]: E0228 04:34:00.155614 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="registry-server" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155620 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="registry-server" Feb 28 04:34:00 crc kubenswrapper[4624]: E0228 04:34:00.155651 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="extract-content" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155657 4624 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="extract-content" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155880 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="443ec606-4a15-42fc-98a1-fad22d33053b" containerName="registry-server" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.155903 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a2c453-3c69-46d8-a6c8-954cb03e3b9f" containerName="registry-server" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.156780 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.164932 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.165193 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.166202 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.186501 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537554-nxxxz"] Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.196166 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7z76\" (UniqueName: \"kubernetes.io/projected/63857fca-5639-49e7-b239-739262af2762-kube-api-access-b7z76\") pod \"auto-csr-approver-29537554-nxxxz\" (UID: \"63857fca-5639-49e7-b239-739262af2762\") " pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.298066 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7z76\" 
(UniqueName: \"kubernetes.io/projected/63857fca-5639-49e7-b239-739262af2762-kube-api-access-b7z76\") pod \"auto-csr-approver-29537554-nxxxz\" (UID: \"63857fca-5639-49e7-b239-739262af2762\") " pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.320261 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7z76\" (UniqueName: \"kubernetes.io/projected/63857fca-5639-49e7-b239-739262af2762-kube-api-access-b7z76\") pod \"auto-csr-approver-29537554-nxxxz\" (UID: \"63857fca-5639-49e7-b239-739262af2762\") " pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.475647 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:00 crc kubenswrapper[4624]: I0228 04:34:00.971556 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537554-nxxxz"] Feb 28 04:34:01 crc kubenswrapper[4624]: I0228 04:34:01.224982 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" event={"ID":"63857fca-5639-49e7-b239-739262af2762","Type":"ContainerStarted","Data":"d58f6084aa7052eb73ba1a405286d46db158c6fdd5647dc0d905be45b8bf3627"} Feb 28 04:34:03 crc kubenswrapper[4624]: I0228 04:34:03.253012 4624 generic.go:334] "Generic (PLEG): container finished" podID="63857fca-5639-49e7-b239-739262af2762" containerID="4f6c45448264a031e2ac7f59209e07ef343e093df02f12a8f4f62d2c58791233" exitCode=0 Feb 28 04:34:03 crc kubenswrapper[4624]: I0228 04:34:03.253191 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" event={"ID":"63857fca-5639-49e7-b239-739262af2762","Type":"ContainerDied","Data":"4f6c45448264a031e2ac7f59209e07ef343e093df02f12a8f4f62d2c58791233"} Feb 28 04:34:04 crc kubenswrapper[4624]: I0228 04:34:04.657545 4624 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:04 crc kubenswrapper[4624]: I0228 04:34:04.734704 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7z76\" (UniqueName: \"kubernetes.io/projected/63857fca-5639-49e7-b239-739262af2762-kube-api-access-b7z76\") pod \"63857fca-5639-49e7-b239-739262af2762\" (UID: \"63857fca-5639-49e7-b239-739262af2762\") " Feb 28 04:34:04 crc kubenswrapper[4624]: I0228 04:34:04.756498 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63857fca-5639-49e7-b239-739262af2762-kube-api-access-b7z76" (OuterVolumeSpecName: "kube-api-access-b7z76") pod "63857fca-5639-49e7-b239-739262af2762" (UID: "63857fca-5639-49e7-b239-739262af2762"). InnerVolumeSpecName "kube-api-access-b7z76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:34:04 crc kubenswrapper[4624]: I0228 04:34:04.838672 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7z76\" (UniqueName: \"kubernetes.io/projected/63857fca-5639-49e7-b239-739262af2762-kube-api-access-b7z76\") on node \"crc\" DevicePath \"\"" Feb 28 04:34:05 crc kubenswrapper[4624]: I0228 04:34:05.281499 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" event={"ID":"63857fca-5639-49e7-b239-739262af2762","Type":"ContainerDied","Data":"d58f6084aa7052eb73ba1a405286d46db158c6fdd5647dc0d905be45b8bf3627"} Feb 28 04:34:05 crc kubenswrapper[4624]: I0228 04:34:05.281569 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58f6084aa7052eb73ba1a405286d46db158c6fdd5647dc0d905be45b8bf3627" Feb 28 04:34:05 crc kubenswrapper[4624]: I0228 04:34:05.281627 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537554-nxxxz" Feb 28 04:34:05 crc kubenswrapper[4624]: I0228 04:34:05.779662 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537548-dkvwp"] Feb 28 04:34:05 crc kubenswrapper[4624]: I0228 04:34:05.791394 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537548-dkvwp"] Feb 28 04:34:06 crc kubenswrapper[4624]: I0228 04:34:06.101194 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264bd128-b339-4e6f-a690-70565410014f" path="/var/lib/kubelet/pods/264bd128-b339-4e6f-a690-70565410014f/volumes" Feb 28 04:34:14 crc kubenswrapper[4624]: I0228 04:34:14.078709 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pbztg_c3c9f58c-1f61-4731-b062-8bc0f3044e68/cert-manager-controller/0.log" Feb 28 04:34:14 crc kubenswrapper[4624]: I0228 04:34:14.395862 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-t4xf2_18477b71-69e7-4103-949d-4c377e3f9246/cert-manager-cainjector/0.log" Feb 28 04:34:14 crc kubenswrapper[4624]: I0228 04:34:14.469298 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q7sw6_4ca9316a-d88d-402c-a943-f858bc793848/cert-manager-webhook/0.log" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.330443 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7hsts"] Feb 28 04:34:20 crc kubenswrapper[4624]: E0228 04:34:20.332498 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63857fca-5639-49e7-b239-739262af2762" containerName="oc" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.332519 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="63857fca-5639-49e7-b239-739262af2762" containerName="oc" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 
04:34:20.333065 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="63857fca-5639-49e7-b239-739262af2762" containerName="oc" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.357897 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.386162 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7hsts"] Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.459531 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-utilities\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.459570 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-catalog-content\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.459670 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrwr\" (UniqueName: \"kubernetes.io/projected/3e23c46d-29e6-4cbb-8434-478f5dead2f9-kube-api-access-rkrwr\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.561482 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrwr\" (UniqueName: 
\"kubernetes.io/projected/3e23c46d-29e6-4cbb-8434-478f5dead2f9-kube-api-access-rkrwr\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.561664 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-utilities\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.561690 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-catalog-content\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.562337 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-catalog-content\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.562946 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-utilities\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.584902 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrwr\" (UniqueName: 
\"kubernetes.io/projected/3e23c46d-29e6-4cbb-8434-478f5dead2f9-kube-api-access-rkrwr\") pod \"redhat-marketplace-7hsts\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:20 crc kubenswrapper[4624]: I0228 04:34:20.695773 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:21 crc kubenswrapper[4624]: I0228 04:34:21.316937 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7hsts"] Feb 28 04:34:21 crc kubenswrapper[4624]: I0228 04:34:21.474845 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerStarted","Data":"48d6884d67750e07727118b7912fcbfaaaf3e2fccd2dd4c4087780a9a19216ce"} Feb 28 04:34:22 crc kubenswrapper[4624]: I0228 04:34:22.488023 4624 generic.go:334] "Generic (PLEG): container finished" podID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerID="3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069" exitCode=0 Feb 28 04:34:22 crc kubenswrapper[4624]: I0228 04:34:22.488165 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerDied","Data":"3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069"} Feb 28 04:34:23 crc kubenswrapper[4624]: I0228 04:34:23.501834 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerStarted","Data":"0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf"} Feb 28 04:34:24 crc kubenswrapper[4624]: I0228 04:34:24.516881 4624 generic.go:334] "Generic (PLEG): container finished" podID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" 
containerID="0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf" exitCode=0 Feb 28 04:34:24 crc kubenswrapper[4624]: I0228 04:34:24.516998 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerDied","Data":"0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf"} Feb 28 04:34:25 crc kubenswrapper[4624]: I0228 04:34:25.531886 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerStarted","Data":"30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e"} Feb 28 04:34:25 crc kubenswrapper[4624]: I0228 04:34:25.558457 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7hsts" podStartSLOduration=3.038108844 podStartE2EDuration="5.558428289s" podCreationTimestamp="2026-02-28 04:34:20 +0000 UTC" firstStartedPulling="2026-02-28 04:34:22.490650949 +0000 UTC m=+3517.154690298" lastFinishedPulling="2026-02-28 04:34:25.010970434 +0000 UTC m=+3519.675009743" observedRunningTime="2026-02-28 04:34:25.554784761 +0000 UTC m=+3520.218824090" watchObservedRunningTime="2026-02-28 04:34:25.558428289 +0000 UTC m=+3520.222467638" Feb 28 04:34:30 crc kubenswrapper[4624]: I0228 04:34:30.696517 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:30 crc kubenswrapper[4624]: I0228 04:34:30.697586 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:30 crc kubenswrapper[4624]: I0228 04:34:30.715633 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-mnlv7_04995dc6-8837-4a1f-91df-bc058d0fb961/nmstate-console-plugin/0.log" Feb 
28 04:34:30 crc kubenswrapper[4624]: I0228 04:34:30.759649 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:30 crc kubenswrapper[4624]: I0228 04:34:30.972779 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-75frj_3abaedfc-0055-4d3d-a10c-0adf10cf8f52/nmstate-handler/0.log" Feb 28 04:34:31 crc kubenswrapper[4624]: I0228 04:34:31.037812 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-pd9k9_66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a/kube-rbac-proxy/0.log" Feb 28 04:34:31 crc kubenswrapper[4624]: I0228 04:34:31.191245 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-pd9k9_66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a/nmstate-metrics/0.log" Feb 28 04:34:31 crc kubenswrapper[4624]: I0228 04:34:31.310417 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-wtfks_54bc88fe-b7dc-43a1-b64b-60723eb0cf7c/nmstate-operator/0.log" Feb 28 04:34:31 crc kubenswrapper[4624]: I0228 04:34:31.442905 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-ctxdh_2f662548-c391-4399-adba-8fa556360cf8/nmstate-webhook/0.log" Feb 28 04:34:31 crc kubenswrapper[4624]: I0228 04:34:31.660188 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:31 crc kubenswrapper[4624]: I0228 04:34:31.710545 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7hsts"] Feb 28 04:34:33 crc kubenswrapper[4624]: I0228 04:34:33.611996 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7hsts" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="registry-server" 
containerID="cri-o://30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e" gracePeriod=2 Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.607494 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.623942 4624 generic.go:334] "Generic (PLEG): container finished" podID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerID="30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e" exitCode=0 Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.624014 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerDied","Data":"30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e"} Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.624052 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7hsts" event={"ID":"3e23c46d-29e6-4cbb-8434-478f5dead2f9","Type":"ContainerDied","Data":"48d6884d67750e07727118b7912fcbfaaaf3e2fccd2dd4c4087780a9a19216ce"} Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.624047 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7hsts" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.624074 4624 scope.go:117] "RemoveContainer" containerID="30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.664333 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkrwr\" (UniqueName: \"kubernetes.io/projected/3e23c46d-29e6-4cbb-8434-478f5dead2f9-kube-api-access-rkrwr\") pod \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.664491 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-catalog-content\") pod \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.664734 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-utilities\") pod \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\" (UID: \"3e23c46d-29e6-4cbb-8434-478f5dead2f9\") " Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.672065 4624 scope.go:117] "RemoveContainer" containerID="0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.674228 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-utilities" (OuterVolumeSpecName: "utilities") pod "3e23c46d-29e6-4cbb-8434-478f5dead2f9" (UID: "3e23c46d-29e6-4cbb-8434-478f5dead2f9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.685333 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e23c46d-29e6-4cbb-8434-478f5dead2f9-kube-api-access-rkrwr" (OuterVolumeSpecName: "kube-api-access-rkrwr") pod "3e23c46d-29e6-4cbb-8434-478f5dead2f9" (UID: "3e23c46d-29e6-4cbb-8434-478f5dead2f9"). InnerVolumeSpecName "kube-api-access-rkrwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.712725 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e23c46d-29e6-4cbb-8434-478f5dead2f9" (UID: "3e23c46d-29e6-4cbb-8434-478f5dead2f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.752577 4624 scope.go:117] "RemoveContainer" containerID="3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.768365 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkrwr\" (UniqueName: \"kubernetes.io/projected/3e23c46d-29e6-4cbb-8434-478f5dead2f9-kube-api-access-rkrwr\") on node \"crc\" DevicePath \"\"" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.768398 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.768412 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e23c46d-29e6-4cbb-8434-478f5dead2f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 
04:34:34.793863 4624 scope.go:117] "RemoveContainer" containerID="30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e" Feb 28 04:34:34 crc kubenswrapper[4624]: E0228 04:34:34.797553 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e\": container with ID starting with 30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e not found: ID does not exist" containerID="30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.797600 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e"} err="failed to get container status \"30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e\": rpc error: code = NotFound desc = could not find container \"30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e\": container with ID starting with 30e2e5ef846a093d924fb219cc46c087981d0ed28cf7b6fad3c0db97f258a14e not found: ID does not exist" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.797627 4624 scope.go:117] "RemoveContainer" containerID="0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf" Feb 28 04:34:34 crc kubenswrapper[4624]: E0228 04:34:34.797969 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf\": container with ID starting with 0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf not found: ID does not exist" containerID="0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.798006 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf"} err="failed to get container status \"0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf\": rpc error: code = NotFound desc = could not find container \"0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf\": container with ID starting with 0f9f89aaa1857147d2e2e67fc14f885344487845d4d31eca276a00bcb9cec5cf not found: ID does not exist" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.798020 4624 scope.go:117] "RemoveContainer" containerID="3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069" Feb 28 04:34:34 crc kubenswrapper[4624]: E0228 04:34:34.798266 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069\": container with ID starting with 3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069 not found: ID does not exist" containerID="3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.798292 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069"} err="failed to get container status \"3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069\": rpc error: code = NotFound desc = could not find container \"3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069\": container with ID starting with 3124120c0f0316b480519694be9ba624f90a7f0564628a5faba591e664040069 not found: ID does not exist" Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.964988 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7hsts"] Feb 28 04:34:34 crc kubenswrapper[4624]: I0228 04:34:34.974405 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7hsts"] Feb 28 04:34:36 crc kubenswrapper[4624]: I0228 04:34:36.100698 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" path="/var/lib/kubelet/pods/3e23c46d-29e6-4cbb-8434-478f5dead2f9/volumes" Feb 28 04:35:03 crc kubenswrapper[4624]: I0228 04:35:03.526346 4624 scope.go:117] "RemoveContainer" containerID="f834e427af34aef08248657b263f74859608e960e26cee4c35d7514af684a07d" Feb 28 04:35:07 crc kubenswrapper[4624]: I0228 04:35:07.683230 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-dj7kv_0633733e-c39d-4767-883b-e1b16be08190/kube-rbac-proxy/0.log" Feb 28 04:35:07 crc kubenswrapper[4624]: I0228 04:35:07.790359 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-dj7kv_0633733e-c39d-4767-883b-e1b16be08190/controller/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.033606 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.147907 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.202034 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.232283 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.324218 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.606043 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.631137 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.648645 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.741127 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.911690 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.987327 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:35:08 crc kubenswrapper[4624]: I0228 04:35:08.992011 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.045520 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/controller/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.279639 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/kube-rbac-proxy/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.285971 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/frr-metrics/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.411735 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/kube-rbac-proxy-frr/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.638845 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/reloader/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.775187 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-8dm5w_a14d415d-3a62-412a-8c98-13543a8bb573/frr-k8s-webhook-server/0.log" Feb 28 04:35:09 crc kubenswrapper[4624]: I0228 04:35:09.983271 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6647b8f4b6-mkvwl_1ad85c59-61bb-4658-8e2a-cdd409e54b3d/manager/0.log" Feb 28 04:35:10 crc kubenswrapper[4624]: I0228 04:35:10.740457 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/frr/0.log" Feb 28 04:35:10 crc kubenswrapper[4624]: I0228 04:35:10.818474 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8qstw_ebe5dd19-2d46-4a69-9847-bc91d0cd4423/kube-rbac-proxy/0.log" Feb 28 04:35:10 crc kubenswrapper[4624]: I0228 04:35:10.820866 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-694dbf9577-jnbcr_7be65953-83ce-403e-aac6-443ced5b772b/webhook-server/0.log" Feb 28 04:35:11 crc kubenswrapper[4624]: I0228 04:35:11.384676 4624 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8qstw_ebe5dd19-2d46-4a69-9847-bc91d0cd4423/speaker/0.log" Feb 28 04:35:28 crc kubenswrapper[4624]: I0228 04:35:28.744981 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/util/0.log" Feb 28 04:35:29 crc kubenswrapper[4624]: I0228 04:35:29.367298 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/util/0.log" Feb 28 04:35:29 crc kubenswrapper[4624]: I0228 04:35:29.376216 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/pull/0.log" Feb 28 04:35:29 crc kubenswrapper[4624]: I0228 04:35:29.413976 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/pull/0.log" Feb 28 04:35:29 crc kubenswrapper[4624]: I0228 04:35:29.654204 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/util/0.log" Feb 28 04:35:29 crc kubenswrapper[4624]: I0228 04:35:29.681062 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/pull/0.log" Feb 28 04:35:29 crc kubenswrapper[4624]: I0228 04:35:29.707604 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/extract/0.log" Feb 28 04:35:29 crc 
kubenswrapper[4624]: I0228 04:35:29.886060 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-utilities/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.159532 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-content/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.213282 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-content/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.250537 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-utilities/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.481586 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-content/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.510665 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-utilities/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.793356 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-utilities/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.980007 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/registry-server/0.log" Feb 28 04:35:30 crc kubenswrapper[4624]: I0228 04:35:30.999743 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-utilities/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.035959 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-content/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.119761 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-content/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.265977 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-content/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.291732 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-utilities/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.531819 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/util/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.803949 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/registry-server/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.852354 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/util/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.940999 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/pull/0.log" Feb 28 04:35:31 crc kubenswrapper[4624]: I0228 04:35:31.943720 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/pull/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.163044 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/util/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.217172 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/extract/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.227015 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/pull/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.368996 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-m6bvr_08a27942-dc8c-4905-b3d3-7202aae79787/marketplace-operator/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.506360 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-utilities/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.723141 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-content/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.729271 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-content/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.771512 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-utilities/0.log" Feb 28 04:35:32 crc kubenswrapper[4624]: I0228 04:35:32.978823 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-utilities/0.log" Feb 28 04:35:33 crc kubenswrapper[4624]: I0228 04:35:33.072362 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-content/0.log" Feb 28 04:35:33 crc kubenswrapper[4624]: I0228 04:35:33.188545 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/registry-server/0.log" Feb 28 04:35:33 crc kubenswrapper[4624]: I0228 04:35:33.285226 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-utilities/0.log" Feb 28 04:35:33 crc kubenswrapper[4624]: I0228 04:35:33.854059 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-utilities/0.log" Feb 28 04:35:33 crc kubenswrapper[4624]: I0228 04:35:33.919215 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-content/0.log" Feb 28 04:35:33 crc kubenswrapper[4624]: I0228 04:35:33.965528 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-content/0.log" Feb 28 04:35:34 crc kubenswrapper[4624]: I0228 04:35:34.164418 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-content/0.log" Feb 28 04:35:34 crc kubenswrapper[4624]: I0228 04:35:34.165776 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-utilities/0.log" Feb 28 04:35:34 crc kubenswrapper[4624]: I0228 04:35:34.678328 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/registry-server/0.log" Feb 28 04:35:49 crc kubenswrapper[4624]: I0228 04:35:49.540024 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:35:49 crc kubenswrapper[4624]: I0228 04:35:49.541713 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.157802 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537556-9746t"] Feb 28 04:36:00 crc kubenswrapper[4624]: E0228 04:36:00.175951 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="extract-content" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 
04:36:00.177886 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="extract-content" Feb 28 04:36:00 crc kubenswrapper[4624]: E0228 04:36:00.178932 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="extract-utilities" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.179834 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="extract-utilities" Feb 28 04:36:00 crc kubenswrapper[4624]: E0228 04:36:00.180274 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="registry-server" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.180395 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="registry-server" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.181765 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e23c46d-29e6-4cbb-8434-478f5dead2f9" containerName="registry-server" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.184792 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-9746t"] Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.184924 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.188287 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.188546 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.188884 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.276994 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8nw\" (UniqueName: \"kubernetes.io/projected/68c4b277-9001-4956-8409-0cf1c869ba60-kube-api-access-5n8nw\") pod \"auto-csr-approver-29537556-9746t\" (UID: \"68c4b277-9001-4956-8409-0cf1c869ba60\") " pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.378706 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n8nw\" (UniqueName: \"kubernetes.io/projected/68c4b277-9001-4956-8409-0cf1c869ba60-kube-api-access-5n8nw\") pod \"auto-csr-approver-29537556-9746t\" (UID: \"68c4b277-9001-4956-8409-0cf1c869ba60\") " pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.422043 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8nw\" (UniqueName: \"kubernetes.io/projected/68c4b277-9001-4956-8409-0cf1c869ba60-kube-api-access-5n8nw\") pod \"auto-csr-approver-29537556-9746t\" (UID: \"68c4b277-9001-4956-8409-0cf1c869ba60\") " pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:00 crc kubenswrapper[4624]: I0228 04:36:00.514011 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:01 crc kubenswrapper[4624]: I0228 04:36:01.125605 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-9746t"] Feb 28 04:36:01 crc kubenswrapper[4624]: I0228 04:36:01.488861 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-9746t" event={"ID":"68c4b277-9001-4956-8409-0cf1c869ba60","Type":"ContainerStarted","Data":"3792fe1fd05982977b71c58f3df7c43f625801aedcd089a28d7c63a64e9a9604"} Feb 28 04:36:03 crc kubenswrapper[4624]: I0228 04:36:03.528629 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-9746t" event={"ID":"68c4b277-9001-4956-8409-0cf1c869ba60","Type":"ContainerStarted","Data":"b339232fb963690aa772c2594e3248170c0d1fe3a1dd0bc7b6381d937d831f76"} Feb 28 04:36:03 crc kubenswrapper[4624]: I0228 04:36:03.557228 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537556-9746t" podStartSLOduration=2.526911263 podStartE2EDuration="3.557209444s" podCreationTimestamp="2026-02-28 04:36:00 +0000 UTC" firstStartedPulling="2026-02-28 04:36:01.142681994 +0000 UTC m=+3615.806721303" lastFinishedPulling="2026-02-28 04:36:02.172980185 +0000 UTC m=+3616.837019484" observedRunningTime="2026-02-28 04:36:03.547508072 +0000 UTC m=+3618.211547391" watchObservedRunningTime="2026-02-28 04:36:03.557209444 +0000 UTC m=+3618.221248753" Feb 28 04:36:04 crc kubenswrapper[4624]: I0228 04:36:04.537765 4624 generic.go:334] "Generic (PLEG): container finished" podID="68c4b277-9001-4956-8409-0cf1c869ba60" containerID="b339232fb963690aa772c2594e3248170c0d1fe3a1dd0bc7b6381d937d831f76" exitCode=0 Feb 28 04:36:04 crc kubenswrapper[4624]: I0228 04:36:04.537813 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-9746t" 
event={"ID":"68c4b277-9001-4956-8409-0cf1c869ba60","Type":"ContainerDied","Data":"b339232fb963690aa772c2594e3248170c0d1fe3a1dd0bc7b6381d937d831f76"} Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.047179 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.220851 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n8nw\" (UniqueName: \"kubernetes.io/projected/68c4b277-9001-4956-8409-0cf1c869ba60-kube-api-access-5n8nw\") pod \"68c4b277-9001-4956-8409-0cf1c869ba60\" (UID: \"68c4b277-9001-4956-8409-0cf1c869ba60\") " Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.228317 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c4b277-9001-4956-8409-0cf1c869ba60-kube-api-access-5n8nw" (OuterVolumeSpecName: "kube-api-access-5n8nw") pod "68c4b277-9001-4956-8409-0cf1c869ba60" (UID: "68c4b277-9001-4956-8409-0cf1c869ba60"). InnerVolumeSpecName "kube-api-access-5n8nw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.325468 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n8nw\" (UniqueName: \"kubernetes.io/projected/68c4b277-9001-4956-8409-0cf1c869ba60-kube-api-access-5n8nw\") on node \"crc\" DevicePath \"\"" Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.553976 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-9746t" event={"ID":"68c4b277-9001-4956-8409-0cf1c869ba60","Type":"ContainerDied","Data":"3792fe1fd05982977b71c58f3df7c43f625801aedcd089a28d7c63a64e9a9604"} Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.554016 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3792fe1fd05982977b71c58f3df7c43f625801aedcd089a28d7c63a64e9a9604" Feb 28 04:36:06 crc kubenswrapper[4624]: I0228 04:36:06.554068 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-9746t" Feb 28 04:36:07 crc kubenswrapper[4624]: I0228 04:36:07.133333 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537550-gm6zl"] Feb 28 04:36:07 crc kubenswrapper[4624]: I0228 04:36:07.144055 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537550-gm6zl"] Feb 28 04:36:08 crc kubenswrapper[4624]: I0228 04:36:08.116381 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2a6fe1-f278-439b-9f6a-3f72b9247d15" path="/var/lib/kubelet/pods/dd2a6fe1-f278-439b-9f6a-3f72b9247d15/volumes" Feb 28 04:36:19 crc kubenswrapper[4624]: I0228 04:36:19.541412 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 04:36:19 crc kubenswrapper[4624]: I0228 04:36:19.542130 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:36:49 crc kubenswrapper[4624]: I0228 04:36:49.539817 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:36:49 crc kubenswrapper[4624]: I0228 04:36:49.540523 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:36:49 crc kubenswrapper[4624]: I0228 04:36:49.540599 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:36:49 crc kubenswrapper[4624]: I0228 04:36:49.541793 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:36:49 crc kubenswrapper[4624]: I0228 04:36:49.541900 4624 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" gracePeriod=600 Feb 28 04:36:49 crc kubenswrapper[4624]: E0228 04:36:49.675614 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:36:50 crc kubenswrapper[4624]: I0228 04:36:50.081303 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" exitCode=0 Feb 28 04:36:50 crc kubenswrapper[4624]: I0228 04:36:50.082054 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a"} Feb 28 04:36:50 crc kubenswrapper[4624]: I0228 04:36:50.082206 4624 scope.go:117] "RemoveContainer" containerID="06e581f0888923edc4bc489a70ad03e776ef8104d0b2056afc183dca47bf121f" Feb 28 04:36:50 crc kubenswrapper[4624]: I0228 04:36:50.082746 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:36:50 crc kubenswrapper[4624]: E0228 04:36:50.083100 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:37:01 crc kubenswrapper[4624]: I0228 04:37:01.088714 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:37:01 crc kubenswrapper[4624]: E0228 04:37:01.089976 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:37:03 crc kubenswrapper[4624]: I0228 04:37:03.644069 4624 scope.go:117] "RemoveContainer" containerID="ad1b7710e185408ebfc0ae02c0bf96cfa4285f30cbd667df5003de6853956e54" Feb 28 04:37:14 crc kubenswrapper[4624]: I0228 04:37:14.087683 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:37:14 crc kubenswrapper[4624]: E0228 04:37:14.088660 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:37:29 crc kubenswrapper[4624]: I0228 04:37:29.087748 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:37:29 crc kubenswrapper[4624]: E0228 04:37:29.089117 4624 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:37:40 crc kubenswrapper[4624]: I0228 04:37:40.088125 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:37:40 crc kubenswrapper[4624]: E0228 04:37:40.088961 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:37:40 crc kubenswrapper[4624]: I0228 04:37:40.779782 4624 generic.go:334] "Generic (PLEG): container finished" podID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerID="dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423" exitCode=0 Feb 28 04:37:40 crc kubenswrapper[4624]: I0228 04:37:40.779837 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" event={"ID":"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c","Type":"ContainerDied","Data":"dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423"} Feb 28 04:37:40 crc kubenswrapper[4624]: I0228 04:37:40.781049 4624 scope.go:117] "RemoveContainer" containerID="dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423" Feb 28 04:37:41 crc kubenswrapper[4624]: I0228 04:37:41.852259 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-v6hw7_must-gather-rz4fg_d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c/gather/0.log" Feb 28 04:37:49 crc kubenswrapper[4624]: I0228 04:37:49.781317 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v6hw7/must-gather-rz4fg"] Feb 28 04:37:49 crc kubenswrapper[4624]: I0228 04:37:49.782361 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="copy" containerID="cri-o://dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853" gracePeriod=2 Feb 28 04:37:49 crc kubenswrapper[4624]: I0228 04:37:49.791671 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v6hw7/must-gather-rz4fg"] Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.238125 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v6hw7_must-gather-rz4fg_d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c/copy/0.log" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.238614 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.383683 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j67n\" (UniqueName: \"kubernetes.io/projected/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-kube-api-access-4j67n\") pod \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.384068 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-must-gather-output\") pod \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\" (UID: \"d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c\") " Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.391312 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-kube-api-access-4j67n" (OuterVolumeSpecName: "kube-api-access-4j67n") pod "d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" (UID: "d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c"). InnerVolumeSpecName "kube-api-access-4j67n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.486312 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j67n\" (UniqueName: \"kubernetes.io/projected/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-kube-api-access-4j67n\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.568237 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" (UID: "d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.589467 4624 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.883765 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v6hw7_must-gather-rz4fg_d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c/copy/0.log" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.884274 4624 generic.go:334] "Generic (PLEG): container finished" podID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerID="dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853" exitCode=143 Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.884337 4624 scope.go:117] "RemoveContainer" containerID="dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.884376 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v6hw7/must-gather-rz4fg" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.906093 4624 scope.go:117] "RemoveContainer" containerID="dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.973882 4624 scope.go:117] "RemoveContainer" containerID="dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853" Feb 28 04:37:50 crc kubenswrapper[4624]: E0228 04:37:50.974432 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853\": container with ID starting with dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853 not found: ID does not exist" containerID="dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.974476 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853"} err="failed to get container status \"dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853\": rpc error: code = NotFound desc = could not find container \"dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853\": container with ID starting with dc7e156bce9904dc0eb68056ef2b8325eaa3f26f102a916fa5540c9b39d80853 not found: ID does not exist" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.974508 4624 scope.go:117] "RemoveContainer" containerID="dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423" Feb 28 04:37:50 crc kubenswrapper[4624]: E0228 04:37:50.975167 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423\": container with ID starting with 
dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423 not found: ID does not exist" containerID="dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423" Feb 28 04:37:50 crc kubenswrapper[4624]: I0228 04:37:50.975244 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423"} err="failed to get container status \"dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423\": rpc error: code = NotFound desc = could not find container \"dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423\": container with ID starting with dcbc2476a37917095be78ab3ad2548fbcd7bfcfc0ea9fa49b88764ddcc455423 not found: ID does not exist" Feb 28 04:37:52 crc kubenswrapper[4624]: I0228 04:37:52.098305 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" path="/var/lib/kubelet/pods/d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c/volumes" Feb 28 04:37:55 crc kubenswrapper[4624]: I0228 04:37:55.087830 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:37:55 crc kubenswrapper[4624]: E0228 04:37:55.088590 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.149670 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537558-6x88z"] Feb 28 04:38:00 crc kubenswrapper[4624]: E0228 04:38:00.151202 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="copy" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.151222 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="copy" Feb 28 04:38:00 crc kubenswrapper[4624]: E0228 04:38:00.151242 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c4b277-9001-4956-8409-0cf1c869ba60" containerName="oc" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.151249 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c4b277-9001-4956-8409-0cf1c869ba60" containerName="oc" Feb 28 04:38:00 crc kubenswrapper[4624]: E0228 04:38:00.151274 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="gather" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.151280 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="gather" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.151564 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="copy" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.151586 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce6671-9f17-4cc4-9fc8-6137cdad3d4c" containerName="gather" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.151606 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c4b277-9001-4956-8409-0cf1c869ba60" containerName="oc" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.152603 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.155685 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.155732 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.156019 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.159397 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-6x88z"] Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.312240 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-855mn\" (UniqueName: \"kubernetes.io/projected/86262e59-aa73-4e1c-bfcb-61a9df1886b3-kube-api-access-855mn\") pod \"auto-csr-approver-29537558-6x88z\" (UID: \"86262e59-aa73-4e1c-bfcb-61a9df1886b3\") " pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.415010 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-855mn\" (UniqueName: \"kubernetes.io/projected/86262e59-aa73-4e1c-bfcb-61a9df1886b3-kube-api-access-855mn\") pod \"auto-csr-approver-29537558-6x88z\" (UID: \"86262e59-aa73-4e1c-bfcb-61a9df1886b3\") " pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.436041 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-855mn\" (UniqueName: \"kubernetes.io/projected/86262e59-aa73-4e1c-bfcb-61a9df1886b3-kube-api-access-855mn\") pod \"auto-csr-approver-29537558-6x88z\" (UID: \"86262e59-aa73-4e1c-bfcb-61a9df1886b3\") " 
pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.479772 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:00 crc kubenswrapper[4624]: I0228 04:38:00.981240 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-6x88z"] Feb 28 04:38:02 crc kubenswrapper[4624]: I0228 04:38:02.018666 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537558-6x88z" event={"ID":"86262e59-aa73-4e1c-bfcb-61a9df1886b3","Type":"ContainerStarted","Data":"a262980b3ee2d15ab73d79f7d763da01f3a3412c3b82d33ba3a72080cbac1206"} Feb 28 04:38:03 crc kubenswrapper[4624]: I0228 04:38:03.029612 4624 generic.go:334] "Generic (PLEG): container finished" podID="86262e59-aa73-4e1c-bfcb-61a9df1886b3" containerID="5f66edaddde02cfa72d469fc64ecb27da439eaf4a4e3174b26ed22f5371a169b" exitCode=0 Feb 28 04:38:03 crc kubenswrapper[4624]: I0228 04:38:03.029804 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537558-6x88z" event={"ID":"86262e59-aa73-4e1c-bfcb-61a9df1886b3","Type":"ContainerDied","Data":"5f66edaddde02cfa72d469fc64ecb27da439eaf4a4e3174b26ed22f5371a169b"} Feb 28 04:38:03 crc kubenswrapper[4624]: I0228 04:38:03.752041 4624 scope.go:117] "RemoveContainer" containerID="3db8626ce96ec0a1d41f14407cb40deb34af87e7f385716eeb7cb32819f334d9" Feb 28 04:38:03 crc kubenswrapper[4624]: I0228 04:38:03.789498 4624 scope.go:117] "RemoveContainer" containerID="5d708ac2062b525cf5c210286792ffed78243d42996d5de2b6298456a2366d83" Feb 28 04:38:04 crc kubenswrapper[4624]: I0228 04:38:04.363313 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:04 crc kubenswrapper[4624]: I0228 04:38:04.507562 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-855mn\" (UniqueName: \"kubernetes.io/projected/86262e59-aa73-4e1c-bfcb-61a9df1886b3-kube-api-access-855mn\") pod \"86262e59-aa73-4e1c-bfcb-61a9df1886b3\" (UID: \"86262e59-aa73-4e1c-bfcb-61a9df1886b3\") " Feb 28 04:38:04 crc kubenswrapper[4624]: I0228 04:38:04.512567 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86262e59-aa73-4e1c-bfcb-61a9df1886b3-kube-api-access-855mn" (OuterVolumeSpecName: "kube-api-access-855mn") pod "86262e59-aa73-4e1c-bfcb-61a9df1886b3" (UID: "86262e59-aa73-4e1c-bfcb-61a9df1886b3"). InnerVolumeSpecName "kube-api-access-855mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:04 crc kubenswrapper[4624]: I0228 04:38:04.610192 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-855mn\" (UniqueName: \"kubernetes.io/projected/86262e59-aa73-4e1c-bfcb-61a9df1886b3-kube-api-access-855mn\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:05 crc kubenswrapper[4624]: I0228 04:38:05.049211 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537558-6x88z" event={"ID":"86262e59-aa73-4e1c-bfcb-61a9df1886b3","Type":"ContainerDied","Data":"a262980b3ee2d15ab73d79f7d763da01f3a3412c3b82d33ba3a72080cbac1206"} Feb 28 04:38:05 crc kubenswrapper[4624]: I0228 04:38:05.049256 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a262980b3ee2d15ab73d79f7d763da01f3a3412c3b82d33ba3a72080cbac1206" Feb 28 04:38:05 crc kubenswrapper[4624]: I0228 04:38:05.049322 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-6x88z" Feb 28 04:38:05 crc kubenswrapper[4624]: I0228 04:38:05.436117 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537552-8n9x7"] Feb 28 04:38:05 crc kubenswrapper[4624]: I0228 04:38:05.446807 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537552-8n9x7"] Feb 28 04:38:06 crc kubenswrapper[4624]: I0228 04:38:06.107741 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420d340a-b4a1-4b60-9268-24770e57adb1" path="/var/lib/kubelet/pods/420d340a-b4a1-4b60-9268-24770e57adb1/volumes" Feb 28 04:38:09 crc kubenswrapper[4624]: I0228 04:38:09.089640 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:38:09 crc kubenswrapper[4624]: E0228 04:38:09.090497 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:38:23 crc kubenswrapper[4624]: I0228 04:38:23.087619 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:38:23 crc kubenswrapper[4624]: E0228 04:38:23.088861 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:38:30 crc kubenswrapper[4624]: I0228 04:38:30.765660 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-775c6bbdc-lvbk6" podUID="7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 28 04:38:34 crc kubenswrapper[4624]: I0228 04:38:34.087952 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:38:34 crc kubenswrapper[4624]: E0228 04:38:34.089015 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:38:45 crc kubenswrapper[4624]: I0228 04:38:45.087051 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:38:45 crc kubenswrapper[4624]: E0228 04:38:45.087738 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:38:57 crc kubenswrapper[4624]: I0228 04:38:57.088245 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:38:57 crc kubenswrapper[4624]: E0228 04:38:57.090770 4624 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:39:03 crc kubenswrapper[4624]: I0228 04:39:03.909534 4624 scope.go:117] "RemoveContainer" containerID="3bc2b8d19dc04509aee4a70184dcc927359a21533ecee5ecb771b6846daaa737" Feb 28 04:39:12 crc kubenswrapper[4624]: I0228 04:39:12.089341 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:39:12 crc kubenswrapper[4624]: E0228 04:39:12.090986 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:39:23 crc kubenswrapper[4624]: I0228 04:39:23.087701 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:39:23 crc kubenswrapper[4624]: E0228 04:39:23.088741 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:39:38 crc kubenswrapper[4624]: I0228 04:39:38.087515 4624 scope.go:117] "RemoveContainer" 
containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:39:38 crc kubenswrapper[4624]: E0228 04:39:38.088366 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:39:52 crc kubenswrapper[4624]: I0228 04:39:52.090965 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:39:52 crc kubenswrapper[4624]: E0228 04:39:52.091886 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.152727 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537560-vtl95"] Feb 28 04:40:00 crc kubenswrapper[4624]: E0228 04:40:00.153744 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86262e59-aa73-4e1c-bfcb-61a9df1886b3" containerName="oc" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.153757 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="86262e59-aa73-4e1c-bfcb-61a9df1886b3" containerName="oc" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.153964 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="86262e59-aa73-4e1c-bfcb-61a9df1886b3" containerName="oc" Feb 28 04:40:00 crc 
kubenswrapper[4624]: I0228 04:40:00.154695 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.157250 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.157507 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.157796 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.171705 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-vtl95"] Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.269805 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4t7r\" (UniqueName: \"kubernetes.io/projected/f2600acf-2ea0-4398-97a8-a2aeae548e6a-kube-api-access-l4t7r\") pod \"auto-csr-approver-29537560-vtl95\" (UID: \"f2600acf-2ea0-4398-97a8-a2aeae548e6a\") " pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.372153 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4t7r\" (UniqueName: \"kubernetes.io/projected/f2600acf-2ea0-4398-97a8-a2aeae548e6a-kube-api-access-l4t7r\") pod \"auto-csr-approver-29537560-vtl95\" (UID: \"f2600acf-2ea0-4398-97a8-a2aeae548e6a\") " pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.396899 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4t7r\" (UniqueName: \"kubernetes.io/projected/f2600acf-2ea0-4398-97a8-a2aeae548e6a-kube-api-access-l4t7r\") pod 
\"auto-csr-approver-29537560-vtl95\" (UID: \"f2600acf-2ea0-4398-97a8-a2aeae548e6a\") " pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.482516 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.976827 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-vtl95"] Feb 28 04:40:00 crc kubenswrapper[4624]: W0228 04:40:00.981182 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2600acf_2ea0_4398_97a8_a2aeae548e6a.slice/crio-6a49dcebbff026e39a255963ce2e74962dd23bd5dd80bb3f879e37189d8357bb WatchSource:0}: Error finding container 6a49dcebbff026e39a255963ce2e74962dd23bd5dd80bb3f879e37189d8357bb: Status 404 returned error can't find the container with id 6a49dcebbff026e39a255963ce2e74962dd23bd5dd80bb3f879e37189d8357bb Feb 28 04:40:00 crc kubenswrapper[4624]: I0228 04:40:00.984643 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:40:01 crc kubenswrapper[4624]: I0228 04:40:01.775251 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-vtl95" event={"ID":"f2600acf-2ea0-4398-97a8-a2aeae548e6a","Type":"ContainerStarted","Data":"6a49dcebbff026e39a255963ce2e74962dd23bd5dd80bb3f879e37189d8357bb"} Feb 28 04:40:02 crc kubenswrapper[4624]: I0228 04:40:02.784997 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-vtl95" event={"ID":"f2600acf-2ea0-4398-97a8-a2aeae548e6a","Type":"ContainerStarted","Data":"a2fef17f63c9c8d81b27d338cb4b9a508029dc25caf9d7fe65aef7471d15addb"} Feb 28 04:40:02 crc kubenswrapper[4624]: I0228 04:40:02.804390 4624 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-infra/auto-csr-approver-29537560-vtl95" podStartSLOduration=1.456859735 podStartE2EDuration="2.80437053s" podCreationTimestamp="2026-02-28 04:40:00 +0000 UTC" firstStartedPulling="2026-02-28 04:40:00.984317218 +0000 UTC m=+3855.648356527" lastFinishedPulling="2026-02-28 04:40:02.331828003 +0000 UTC m=+3856.995867322" observedRunningTime="2026-02-28 04:40:02.8028866 +0000 UTC m=+3857.466925909" watchObservedRunningTime="2026-02-28 04:40:02.80437053 +0000 UTC m=+3857.468409849" Feb 28 04:40:03 crc kubenswrapper[4624]: I0228 04:40:03.798610 4624 generic.go:334] "Generic (PLEG): container finished" podID="f2600acf-2ea0-4398-97a8-a2aeae548e6a" containerID="a2fef17f63c9c8d81b27d338cb4b9a508029dc25caf9d7fe65aef7471d15addb" exitCode=0 Feb 28 04:40:03 crc kubenswrapper[4624]: I0228 04:40:03.798668 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-vtl95" event={"ID":"f2600acf-2ea0-4398-97a8-a2aeae548e6a","Type":"ContainerDied","Data":"a2fef17f63c9c8d81b27d338cb4b9a508029dc25caf9d7fe65aef7471d15addb"} Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.196255 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.288972 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4t7r\" (UniqueName: \"kubernetes.io/projected/f2600acf-2ea0-4398-97a8-a2aeae548e6a-kube-api-access-l4t7r\") pod \"f2600acf-2ea0-4398-97a8-a2aeae548e6a\" (UID: \"f2600acf-2ea0-4398-97a8-a2aeae548e6a\") " Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.295047 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2600acf-2ea0-4398-97a8-a2aeae548e6a-kube-api-access-l4t7r" (OuterVolumeSpecName: "kube-api-access-l4t7r") pod "f2600acf-2ea0-4398-97a8-a2aeae548e6a" (UID: "f2600acf-2ea0-4398-97a8-a2aeae548e6a"). InnerVolumeSpecName "kube-api-access-l4t7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.391534 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4t7r\" (UniqueName: \"kubernetes.io/projected/f2600acf-2ea0-4398-97a8-a2aeae548e6a-kube-api-access-l4t7r\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.816198 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-vtl95" event={"ID":"f2600acf-2ea0-4398-97a8-a2aeae548e6a","Type":"ContainerDied","Data":"6a49dcebbff026e39a255963ce2e74962dd23bd5dd80bb3f879e37189d8357bb"} Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.816244 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a49dcebbff026e39a255963ce2e74962dd23bd5dd80bb3f879e37189d8357bb" Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.816249 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-vtl95" Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.886209 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537554-nxxxz"] Feb 28 04:40:05 crc kubenswrapper[4624]: I0228 04:40:05.895958 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537554-nxxxz"] Feb 28 04:40:06 crc kubenswrapper[4624]: I0228 04:40:06.105058 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63857fca-5639-49e7-b239-739262af2762" path="/var/lib/kubelet/pods/63857fca-5639-49e7-b239-739262af2762/volumes" Feb 28 04:40:06 crc kubenswrapper[4624]: I0228 04:40:06.105561 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:40:06 crc kubenswrapper[4624]: E0228 04:40:06.105897 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:40:21 crc kubenswrapper[4624]: I0228 04:40:21.088756 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:40:21 crc kubenswrapper[4624]: E0228 04:40:21.090182 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" 
podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:40:34 crc kubenswrapper[4624]: I0228 04:40:34.087021 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:40:34 crc kubenswrapper[4624]: E0228 04:40:34.087884 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.088289 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:40:48 crc kubenswrapper[4624]: E0228 04:40:48.089253 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.630888 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zrs82/must-gather-8hw26"] Feb 28 04:40:48 crc kubenswrapper[4624]: E0228 04:40:48.631474 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2600acf-2ea0-4398-97a8-a2aeae548e6a" containerName="oc" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.631496 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2600acf-2ea0-4398-97a8-a2aeae548e6a" containerName="oc" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.631696 4624 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f2600acf-2ea0-4398-97a8-a2aeae548e6a" containerName="oc" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.632703 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.634855 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zrs82"/"openshift-service-ca.crt" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.635050 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zrs82"/"kube-root-ca.crt" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.641652 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zrs82/must-gather-8hw26"] Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.646438 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zrs82"/"default-dockercfg-x57jk" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.679223 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-must-gather-output\") pod \"must-gather-8hw26\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.679334 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml94m\" (UniqueName: \"kubernetes.io/projected/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-kube-api-access-ml94m\") pod \"must-gather-8hw26\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.781365 4624 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-must-gather-output\") pod \"must-gather-8hw26\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.781438 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml94m\" (UniqueName: \"kubernetes.io/projected/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-kube-api-access-ml94m\") pod \"must-gather-8hw26\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.782194 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-must-gather-output\") pod \"must-gather-8hw26\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.807293 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml94m\" (UniqueName: \"kubernetes.io/projected/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-kube-api-access-ml94m\") pod \"must-gather-8hw26\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:48 crc kubenswrapper[4624]: I0228 04:40:48.952194 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:40:49 crc kubenswrapper[4624]: I0228 04:40:49.429498 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zrs82/must-gather-8hw26"] Feb 28 04:40:50 crc kubenswrapper[4624]: I0228 04:40:50.289396 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/must-gather-8hw26" event={"ID":"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec","Type":"ContainerStarted","Data":"b6807c908b273bcf0c28f0a5f37ba67430d8e82aaf9f2b0d7b83548efe0844a7"} Feb 28 04:40:50 crc kubenswrapper[4624]: I0228 04:40:50.289823 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/must-gather-8hw26" event={"ID":"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec","Type":"ContainerStarted","Data":"690b2051232960e2ae8d304a90855f1773a80109db67607d79ef5067e749b2ec"} Feb 28 04:40:50 crc kubenswrapper[4624]: I0228 04:40:50.289841 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/must-gather-8hw26" event={"ID":"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec","Type":"ContainerStarted","Data":"6cdaf362f1552703d6bbf80aa265e3f01b8c2a53e18b8c9cea0bc6a91844ef35"} Feb 28 04:40:50 crc kubenswrapper[4624]: I0228 04:40:50.310889 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zrs82/must-gather-8hw26" podStartSLOduration=2.310866389 podStartE2EDuration="2.310866389s" podCreationTimestamp="2026-02-28 04:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:40:50.304416585 +0000 UTC m=+3904.968455894" watchObservedRunningTime="2026-02-28 04:40:50.310866389 +0000 UTC m=+3904.974905698" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.695203 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zrs82/crc-debug-t7mvw"] Feb 28 04:40:53 crc kubenswrapper[4624]: 
I0228 04:40:53.696951 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.791675 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/12e439ac-5af4-423d-8d36-d5da639b87f4-kube-api-access-skrs7\") pod \"crc-debug-t7mvw\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.792399 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/12e439ac-5af4-423d-8d36-d5da639b87f4-host\") pod \"crc-debug-t7mvw\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.894418 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/12e439ac-5af4-423d-8d36-d5da639b87f4-host\") pod \"crc-debug-t7mvw\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.894516 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/12e439ac-5af4-423d-8d36-d5da639b87f4-kube-api-access-skrs7\") pod \"crc-debug-t7mvw\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.894631 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/12e439ac-5af4-423d-8d36-d5da639b87f4-host\") pod \"crc-debug-t7mvw\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") 
" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:53 crc kubenswrapper[4624]: I0228 04:40:53.914251 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/12e439ac-5af4-423d-8d36-d5da639b87f4-kube-api-access-skrs7\") pod \"crc-debug-t7mvw\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:54 crc kubenswrapper[4624]: I0228 04:40:54.017160 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:40:54 crc kubenswrapper[4624]: W0228 04:40:54.052446 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12e439ac_5af4_423d_8d36_d5da639b87f4.slice/crio-c691d05147980f682de825290c1bb20abcc14f0608d6c56956555a9b78befc90 WatchSource:0}: Error finding container c691d05147980f682de825290c1bb20abcc14f0608d6c56956555a9b78befc90: Status 404 returned error can't find the container with id c691d05147980f682de825290c1bb20abcc14f0608d6c56956555a9b78befc90 Feb 28 04:40:54 crc kubenswrapper[4624]: I0228 04:40:54.324762 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" event={"ID":"12e439ac-5af4-423d-8d36-d5da639b87f4","Type":"ContainerStarted","Data":"9e4b4dca62196265e37eed5040869306c64ed027a01f20abd2c4a7c2de0fd7aa"} Feb 28 04:40:54 crc kubenswrapper[4624]: I0228 04:40:54.324809 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" event={"ID":"12e439ac-5af4-423d-8d36-d5da639b87f4","Type":"ContainerStarted","Data":"c691d05147980f682de825290c1bb20abcc14f0608d6c56956555a9b78befc90"} Feb 28 04:40:54 crc kubenswrapper[4624]: I0228 04:40:54.360847 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" 
podStartSLOduration=1.360817094 podStartE2EDuration="1.360817094s" podCreationTimestamp="2026-02-28 04:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:40:54.34804909 +0000 UTC m=+3909.012088399" watchObservedRunningTime="2026-02-28 04:40:54.360817094 +0000 UTC m=+3909.024856403" Feb 28 04:41:00 crc kubenswrapper[4624]: I0228 04:41:00.140841 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:41:00 crc kubenswrapper[4624]: E0228 04:41:00.141578 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:41:04 crc kubenswrapper[4624]: I0228 04:41:04.052769 4624 scope.go:117] "RemoveContainer" containerID="4f6c45448264a031e2ac7f59209e07ef343e093df02f12a8f4f62d2c58791233" Feb 28 04:41:11 crc kubenswrapper[4624]: I0228 04:41:11.087433 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:41:11 crc kubenswrapper[4624]: E0228 04:41:11.088999 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:41:25 crc kubenswrapper[4624]: I0228 04:41:25.086636 4624 scope.go:117] 
"RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:41:25 crc kubenswrapper[4624]: E0228 04:41:25.087370 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:41:31 crc kubenswrapper[4624]: I0228 04:41:31.677843 4624 generic.go:334] "Generic (PLEG): container finished" podID="12e439ac-5af4-423d-8d36-d5da639b87f4" containerID="9e4b4dca62196265e37eed5040869306c64ed027a01f20abd2c4a7c2de0fd7aa" exitCode=0 Feb 28 04:41:31 crc kubenswrapper[4624]: I0228 04:41:31.677976 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" event={"ID":"12e439ac-5af4-423d-8d36-d5da639b87f4","Type":"ContainerDied","Data":"9e4b4dca62196265e37eed5040869306c64ed027a01f20abd2c4a7c2de0fd7aa"} Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.172132 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.251596 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zrs82/crc-debug-t7mvw"] Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.264708 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zrs82/crc-debug-t7mvw"] Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.338924 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/12e439ac-5af4-423d-8d36-d5da639b87f4-kube-api-access-skrs7\") pod \"12e439ac-5af4-423d-8d36-d5da639b87f4\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.339200 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/12e439ac-5af4-423d-8d36-d5da639b87f4-host\") pod \"12e439ac-5af4-423d-8d36-d5da639b87f4\" (UID: \"12e439ac-5af4-423d-8d36-d5da639b87f4\") " Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.339712 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12e439ac-5af4-423d-8d36-d5da639b87f4-host" (OuterVolumeSpecName: "host") pod "12e439ac-5af4-423d-8d36-d5da639b87f4" (UID: "12e439ac-5af4-423d-8d36-d5da639b87f4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.346109 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e439ac-5af4-423d-8d36-d5da639b87f4-kube-api-access-skrs7" (OuterVolumeSpecName: "kube-api-access-skrs7") pod "12e439ac-5af4-423d-8d36-d5da639b87f4" (UID: "12e439ac-5af4-423d-8d36-d5da639b87f4"). InnerVolumeSpecName "kube-api-access-skrs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.442144 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/12e439ac-5af4-423d-8d36-d5da639b87f4-host\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.442186 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/12e439ac-5af4-423d-8d36-d5da639b87f4-kube-api-access-skrs7\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.704788 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c691d05147980f682de825290c1bb20abcc14f0608d6c56956555a9b78befc90" Feb 28 04:41:33 crc kubenswrapper[4624]: I0228 04:41:33.704869 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-t7mvw" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.098264 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e439ac-5af4-423d-8d36-d5da639b87f4" path="/var/lib/kubelet/pods/12e439ac-5af4-423d-8d36-d5da639b87f4/volumes" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.699055 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zrs82/crc-debug-pcsq5"] Feb 28 04:41:34 crc kubenswrapper[4624]: E0228 04:41:34.700205 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e439ac-5af4-423d-8d36-d5da639b87f4" containerName="container-00" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.700223 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e439ac-5af4-423d-8d36-d5da639b87f4" containerName="container-00" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.700429 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e439ac-5af4-423d-8d36-d5da639b87f4" 
containerName="container-00" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.701068 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.869530 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkkch\" (UniqueName: \"kubernetes.io/projected/fd5bee4a-5348-44ca-9e43-f6c8511e4775-kube-api-access-jkkch\") pod \"crc-debug-pcsq5\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.869836 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd5bee4a-5348-44ca-9e43-f6c8511e4775-host\") pod \"crc-debug-pcsq5\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.971475 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkkch\" (UniqueName: \"kubernetes.io/projected/fd5bee4a-5348-44ca-9e43-f6c8511e4775-kube-api-access-jkkch\") pod \"crc-debug-pcsq5\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.971864 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd5bee4a-5348-44ca-9e43-f6c8511e4775-host\") pod \"crc-debug-pcsq5\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:34 crc kubenswrapper[4624]: I0228 04:41:34.971953 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd5bee4a-5348-44ca-9e43-f6c8511e4775-host\") 
pod \"crc-debug-pcsq5\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:35 crc kubenswrapper[4624]: I0228 04:41:35.001963 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkkch\" (UniqueName: \"kubernetes.io/projected/fd5bee4a-5348-44ca-9e43-f6c8511e4775-kube-api-access-jkkch\") pod \"crc-debug-pcsq5\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:35 crc kubenswrapper[4624]: I0228 04:41:35.023683 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:35 crc kubenswrapper[4624]: I0228 04:41:35.725445 4624 generic.go:334] "Generic (PLEG): container finished" podID="fd5bee4a-5348-44ca-9e43-f6c8511e4775" containerID="013ca0117a6fb3c4255eab4a78474f8ca0018781f7384e29a51f2a7d5074715e" exitCode=0 Feb 28 04:41:35 crc kubenswrapper[4624]: I0228 04:41:35.725816 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-pcsq5" event={"ID":"fd5bee4a-5348-44ca-9e43-f6c8511e4775","Type":"ContainerDied","Data":"013ca0117a6fb3c4255eab4a78474f8ca0018781f7384e29a51f2a7d5074715e"} Feb 28 04:41:35 crc kubenswrapper[4624]: I0228 04:41:35.725856 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-pcsq5" event={"ID":"fd5bee4a-5348-44ca-9e43-f6c8511e4775","Type":"ContainerStarted","Data":"b16c7d4848215fde585a0cb01e9d554065af9e780cf2556f289352b78240ba3f"} Feb 28 04:41:36 crc kubenswrapper[4624]: I0228 04:41:36.185862 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zrs82/crc-debug-pcsq5"] Feb 28 04:41:36 crc kubenswrapper[4624]: I0228 04:41:36.196552 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zrs82/crc-debug-pcsq5"] Feb 28 04:41:36 crc kubenswrapper[4624]: I0228 04:41:36.865056 4624 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.023480 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd5bee4a-5348-44ca-9e43-f6c8511e4775-host\") pod \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.023686 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkkch\" (UniqueName: \"kubernetes.io/projected/fd5bee4a-5348-44ca-9e43-f6c8511e4775-kube-api-access-jkkch\") pod \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\" (UID: \"fd5bee4a-5348-44ca-9e43-f6c8511e4775\") " Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.023992 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd5bee4a-5348-44ca-9e43-f6c8511e4775-host" (OuterVolumeSpecName: "host") pod "fd5bee4a-5348-44ca-9e43-f6c8511e4775" (UID: "fd5bee4a-5348-44ca-9e43-f6c8511e4775"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.035252 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5bee4a-5348-44ca-9e43-f6c8511e4775-kube-api-access-jkkch" (OuterVolumeSpecName: "kube-api-access-jkkch") pod "fd5bee4a-5348-44ca-9e43-f6c8511e4775" (UID: "fd5bee4a-5348-44ca-9e43-f6c8511e4775"). InnerVolumeSpecName "kube-api-access-jkkch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.087210 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:41:37 crc kubenswrapper[4624]: E0228 04:41:37.087481 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.125962 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkkch\" (UniqueName: \"kubernetes.io/projected/fd5bee4a-5348-44ca-9e43-f6c8511e4775-kube-api-access-jkkch\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.125993 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fd5bee4a-5348-44ca-9e43-f6c8511e4775-host\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.440921 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zrs82/crc-debug-btrp9"] Feb 28 04:41:37 crc kubenswrapper[4624]: E0228 04:41:37.442011 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5bee4a-5348-44ca-9e43-f6c8511e4775" containerName="container-00" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.442038 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5bee4a-5348-44ca-9e43-f6c8511e4775" containerName="container-00" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.442415 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5bee4a-5348-44ca-9e43-f6c8511e4775" 
containerName="container-00" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.443865 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.637863 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrtfg\" (UniqueName: \"kubernetes.io/projected/0c69906d-b69c-4e2a-bf9f-78007702fc44-kube-api-access-lrtfg\") pod \"crc-debug-btrp9\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.637944 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c69906d-b69c-4e2a-bf9f-78007702fc44-host\") pod \"crc-debug-btrp9\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.739204 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrtfg\" (UniqueName: \"kubernetes.io/projected/0c69906d-b69c-4e2a-bf9f-78007702fc44-kube-api-access-lrtfg\") pod \"crc-debug-btrp9\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.739262 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c69906d-b69c-4e2a-bf9f-78007702fc44-host\") pod \"crc-debug-btrp9\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.739416 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c69906d-b69c-4e2a-bf9f-78007702fc44-host\") 
pod \"crc-debug-btrp9\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.746193 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b16c7d4848215fde585a0cb01e9d554065af9e780cf2556f289352b78240ba3f" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.746253 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-pcsq5" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.768656 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrtfg\" (UniqueName: \"kubernetes.io/projected/0c69906d-b69c-4e2a-bf9f-78007702fc44-kube-api-access-lrtfg\") pod \"crc-debug-btrp9\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:37 crc kubenswrapper[4624]: I0228 04:41:37.774142 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:38 crc kubenswrapper[4624]: I0228 04:41:38.099411 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5bee4a-5348-44ca-9e43-f6c8511e4775" path="/var/lib/kubelet/pods/fd5bee4a-5348-44ca-9e43-f6c8511e4775/volumes" Feb 28 04:41:38 crc kubenswrapper[4624]: I0228 04:41:38.757578 4624 generic.go:334] "Generic (PLEG): container finished" podID="0c69906d-b69c-4e2a-bf9f-78007702fc44" containerID="e544a3be7790b7d9ebf05b782596af55b3cbe77c9d6c4d321932d1f1c06c9f65" exitCode=0 Feb 28 04:41:38 crc kubenswrapper[4624]: I0228 04:41:38.757635 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-btrp9" event={"ID":"0c69906d-b69c-4e2a-bf9f-78007702fc44","Type":"ContainerDied","Data":"e544a3be7790b7d9ebf05b782596af55b3cbe77c9d6c4d321932d1f1c06c9f65"} Feb 28 04:41:38 crc kubenswrapper[4624]: I0228 04:41:38.757951 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/crc-debug-btrp9" event={"ID":"0c69906d-b69c-4e2a-bf9f-78007702fc44","Type":"ContainerStarted","Data":"c35bd7912f243b4d974ebfc9ee9f35bd396b2598f5bd805e7445ab2f81d17ba7"} Feb 28 04:41:38 crc kubenswrapper[4624]: I0228 04:41:38.806168 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zrs82/crc-debug-btrp9"] Feb 28 04:41:38 crc kubenswrapper[4624]: I0228 04:41:38.817160 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zrs82/crc-debug-btrp9"] Feb 28 04:41:39 crc kubenswrapper[4624]: I0228 04:41:39.864651 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.019058 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c69906d-b69c-4e2a-bf9f-78007702fc44-host\") pod \"0c69906d-b69c-4e2a-bf9f-78007702fc44\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.019138 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c69906d-b69c-4e2a-bf9f-78007702fc44-host" (OuterVolumeSpecName: "host") pod "0c69906d-b69c-4e2a-bf9f-78007702fc44" (UID: "0c69906d-b69c-4e2a-bf9f-78007702fc44"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.019245 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrtfg\" (UniqueName: \"kubernetes.io/projected/0c69906d-b69c-4e2a-bf9f-78007702fc44-kube-api-access-lrtfg\") pod \"0c69906d-b69c-4e2a-bf9f-78007702fc44\" (UID: \"0c69906d-b69c-4e2a-bf9f-78007702fc44\") " Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.019646 4624 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c69906d-b69c-4e2a-bf9f-78007702fc44-host\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.032573 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c69906d-b69c-4e2a-bf9f-78007702fc44-kube-api-access-lrtfg" (OuterVolumeSpecName: "kube-api-access-lrtfg") pod "0c69906d-b69c-4e2a-bf9f-78007702fc44" (UID: "0c69906d-b69c-4e2a-bf9f-78007702fc44"). InnerVolumeSpecName "kube-api-access-lrtfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.109614 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c69906d-b69c-4e2a-bf9f-78007702fc44" path="/var/lib/kubelet/pods/0c69906d-b69c-4e2a-bf9f-78007702fc44/volumes" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.121606 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrtfg\" (UniqueName: \"kubernetes.io/projected/0c69906d-b69c-4e2a-bf9f-78007702fc44-kube-api-access-lrtfg\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.776930 4624 scope.go:117] "RemoveContainer" containerID="e544a3be7790b7d9ebf05b782596af55b3cbe77c9d6c4d321932d1f1c06c9f65" Feb 28 04:41:40 crc kubenswrapper[4624]: I0228 04:41:40.776985 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/crc-debug-btrp9" Feb 28 04:41:49 crc kubenswrapper[4624]: I0228 04:41:49.087201 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:41:49 crc kubenswrapper[4624]: E0228 04:41:49.087845 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.087266 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.171868 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537562-tbwph"] Feb 28 04:42:00 
crc kubenswrapper[4624]: E0228 04:42:00.172408 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c69906d-b69c-4e2a-bf9f-78007702fc44" containerName="container-00" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.172427 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c69906d-b69c-4e2a-bf9f-78007702fc44" containerName="container-00" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.172613 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c69906d-b69c-4e2a-bf9f-78007702fc44" containerName="container-00" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.173176 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.176496 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.176685 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.176804 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.182998 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-tbwph"] Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.201399 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjsj\" (UniqueName: \"kubernetes.io/projected/9bd2fdae-47d8-468e-99e0-40a3f203b16b-kube-api-access-8gjsj\") pod \"auto-csr-approver-29537562-tbwph\" (UID: \"9bd2fdae-47d8-468e-99e0-40a3f203b16b\") " pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.303773 4624 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gjsj\" (UniqueName: \"kubernetes.io/projected/9bd2fdae-47d8-468e-99e0-40a3f203b16b-kube-api-access-8gjsj\") pod \"auto-csr-approver-29537562-tbwph\" (UID: \"9bd2fdae-47d8-468e-99e0-40a3f203b16b\") " pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.324628 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gjsj\" (UniqueName: \"kubernetes.io/projected/9bd2fdae-47d8-468e-99e0-40a3f203b16b-kube-api-access-8gjsj\") pod \"auto-csr-approver-29537562-tbwph\" (UID: \"9bd2fdae-47d8-468e-99e0-40a3f203b16b\") " pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.513974 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:00 crc kubenswrapper[4624]: I0228 04:42:00.974287 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"fdfb1cf3f9040f3df4e179148406e99de62a8bafd6e00cd6ed8e57a52338ea14"} Feb 28 04:42:01 crc kubenswrapper[4624]: I0228 04:42:01.049696 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-tbwph"] Feb 28 04:42:01 crc kubenswrapper[4624]: W0228 04:42:01.070672 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bd2fdae_47d8_468e_99e0_40a3f203b16b.slice/crio-5dc8dd66365f5dc799eaed6f54b5daec9c385f72a3ca5627139faa5c15eb7717 WatchSource:0}: Error finding container 5dc8dd66365f5dc799eaed6f54b5daec9c385f72a3ca5627139faa5c15eb7717: Status 404 returned error can't find the container with id 
5dc8dd66365f5dc799eaed6f54b5daec9c385f72a3ca5627139faa5c15eb7717 Feb 28 04:42:02 crc kubenswrapper[4624]: I0228 04:42:02.008690 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537562-tbwph" event={"ID":"9bd2fdae-47d8-468e-99e0-40a3f203b16b","Type":"ContainerStarted","Data":"5dc8dd66365f5dc799eaed6f54b5daec9c385f72a3ca5627139faa5c15eb7717"} Feb 28 04:42:03 crc kubenswrapper[4624]: I0228 04:42:03.017683 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537562-tbwph" event={"ID":"9bd2fdae-47d8-468e-99e0-40a3f203b16b","Type":"ContainerDied","Data":"581159b51f85d2e02fd7d8b90f86ad081a01550b6264f8c21ae7e8a3f87c8267"} Feb 28 04:42:03 crc kubenswrapper[4624]: I0228 04:42:03.017546 4624 generic.go:334] "Generic (PLEG): container finished" podID="9bd2fdae-47d8-468e-99e0-40a3f203b16b" containerID="581159b51f85d2e02fd7d8b90f86ad081a01550b6264f8c21ae7e8a3f87c8267" exitCode=0 Feb 28 04:42:04 crc kubenswrapper[4624]: I0228 04:42:04.429228 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:04 crc kubenswrapper[4624]: I0228 04:42:04.495300 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gjsj\" (UniqueName: \"kubernetes.io/projected/9bd2fdae-47d8-468e-99e0-40a3f203b16b-kube-api-access-8gjsj\") pod \"9bd2fdae-47d8-468e-99e0-40a3f203b16b\" (UID: \"9bd2fdae-47d8-468e-99e0-40a3f203b16b\") " Feb 28 04:42:04 crc kubenswrapper[4624]: I0228 04:42:04.505852 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd2fdae-47d8-468e-99e0-40a3f203b16b-kube-api-access-8gjsj" (OuterVolumeSpecName: "kube-api-access-8gjsj") pod "9bd2fdae-47d8-468e-99e0-40a3f203b16b" (UID: "9bd2fdae-47d8-468e-99e0-40a3f203b16b"). InnerVolumeSpecName "kube-api-access-8gjsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:42:04 crc kubenswrapper[4624]: I0228 04:42:04.597059 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gjsj\" (UniqueName: \"kubernetes.io/projected/9bd2fdae-47d8-468e-99e0-40a3f203b16b-kube-api-access-8gjsj\") on node \"crc\" DevicePath \"\"" Feb 28 04:42:05 crc kubenswrapper[4624]: I0228 04:42:05.040946 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537562-tbwph" event={"ID":"9bd2fdae-47d8-468e-99e0-40a3f203b16b","Type":"ContainerDied","Data":"5dc8dd66365f5dc799eaed6f54b5daec9c385f72a3ca5627139faa5c15eb7717"} Feb 28 04:42:05 crc kubenswrapper[4624]: I0228 04:42:05.041257 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc8dd66365f5dc799eaed6f54b5daec9c385f72a3ca5627139faa5c15eb7717" Feb 28 04:42:05 crc kubenswrapper[4624]: I0228 04:42:05.041213 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-tbwph" Feb 28 04:42:05 crc kubenswrapper[4624]: I0228 04:42:05.502316 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-9746t"] Feb 28 04:42:05 crc kubenswrapper[4624]: I0228 04:42:05.512914 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-9746t"] Feb 28 04:42:06 crc kubenswrapper[4624]: I0228 04:42:06.113910 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c4b277-9001-4956-8409-0cf1c869ba60" path="/var/lib/kubelet/pods/68c4b277-9001-4956-8409-0cf1c869ba60/volumes" Feb 28 04:42:25 crc kubenswrapper[4624]: I0228 04:42:25.056192 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-697469cdb8-v44r2_413c221c-acb0-4f2d-9621-b5bd0cdc14a5/barbican-api/0.log" Feb 28 04:42:25 crc kubenswrapper[4624]: I0228 04:42:25.417198 4624 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bc94fcbd6-4dndd_5fbb7219-e74f-4adf-bf31-31794a503f07/barbican-keystone-listener/0.log" Feb 28 04:42:25 crc kubenswrapper[4624]: I0228 04:42:25.677921 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bc94fcbd6-4dndd_5fbb7219-e74f-4adf-bf31-31794a503f07/barbican-keystone-listener-log/0.log" Feb 28 04:42:25 crc kubenswrapper[4624]: I0228 04:42:25.689510 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c4588546c-gkrmm_3189b6cc-a911-48f2-aff9-f41b3313d38a/barbican-worker/0.log" Feb 28 04:42:25 crc kubenswrapper[4624]: I0228 04:42:25.980037 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vbj86_b81c936c-7c68-4155-bee6-b4fab7bc44e8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:25 crc kubenswrapper[4624]: I0228 04:42:25.990654 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c4588546c-gkrmm_3189b6cc-a911-48f2-aff9-f41b3313d38a/barbican-worker-log/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.046874 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-697469cdb8-v44r2_413c221c-acb0-4f2d-9621-b5bd0cdc14a5/barbican-api-log/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.273421 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/ceilometer-notification-agent/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.280185 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/ceilometer-central-agent/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.379735 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/proxy-httpd/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.508777 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_954e38ba-b661-4225-b29e-5c2b4a1b8675/sg-core/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.607730 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1343dbdf-afca-44d9-a8b3-828c71fe25a1/cinder-api-log/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.609839 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1343dbdf-afca-44d9-a8b3-828c71fe25a1/cinder-api/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.848136 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d77d2859-7ba3-4a5a-b2e2-536e824afade/cinder-scheduler/0.log" Feb 28 04:42:26 crc kubenswrapper[4624]: I0228 04:42:26.902845 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d77d2859-7ba3-4a5a-b2e2-536e824afade/probe/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.054312 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qnnrb_88fcba71-7eeb-4780-88f3-3d751230eb2a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.276770 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-gghhz_a2c9b638-8f30-49a3-a818-05bc76a99b30/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.326807 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-zth9s_4b13e83c-72f7-4925-abc6-1e284917cb66/init/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.619503 
4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jnldp_23fb1205-74ef-497d-bbd0-10fff39c6a4a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.628896 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-zth9s_4b13e83c-72f7-4925-abc6-1e284917cb66/init/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.726606 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56f7ccd8f7-zth9s_4b13e83c-72f7-4925-abc6-1e284917cb66/dnsmasq-dns/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.876067 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_593dc11b-6b54-49d4-b9d9-c233b6ecd3ca/glance-log/0.log" Feb 28 04:42:27 crc kubenswrapper[4624]: I0228 04:42:27.895690 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_593dc11b-6b54-49d4-b9d9-c233b6ecd3ca/glance-httpd/0.log" Feb 28 04:42:28 crc kubenswrapper[4624]: I0228 04:42:28.154603 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a94cc3e1-53ad-429f-b778-ae8941ba8085/glance-httpd/0.log" Feb 28 04:42:28 crc kubenswrapper[4624]: I0228 04:42:28.188062 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a94cc3e1-53ad-429f-b778-ae8941ba8085/glance-log/0.log" Feb 28 04:42:28 crc kubenswrapper[4624]: I0228 04:42:28.444506 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988c5cd-svksm_6ccc2a9a-c3cc-4ddb-a700-86713957337e/horizon/3.log" Feb 28 04:42:28 crc kubenswrapper[4624]: I0228 04:42:28.641885 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988c5cd-svksm_6ccc2a9a-c3cc-4ddb-a700-86713957337e/horizon/2.log" Feb 28 04:42:28 
crc kubenswrapper[4624]: I0228 04:42:28.906625 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988c5cd-svksm_6ccc2a9a-c3cc-4ddb-a700-86713957337e/horizon-log/0.log" Feb 28 04:42:29 crc kubenswrapper[4624]: I0228 04:42:29.345605 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-67png_b1de2dd9-0700-4acd-aeeb-4b75a3bcb78b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:29 crc kubenswrapper[4624]: I0228 04:42:29.616058 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c9bfg_9d24e266-6648-42ed-a44e-0b37c5e974a0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:29 crc kubenswrapper[4624]: I0228 04:42:29.883323 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f48865754-rdngs_3eeb3ef4-037f-4755-a2d3-46df6804b116/keystone-api/0.log" Feb 28 04:42:29 crc kubenswrapper[4624]: I0228 04:42:29.928613 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537521-9tntm_95510d44-6b29-45b3-b0a0-4a6ad761fa4e/keystone-cron/0.log" Feb 28 04:42:30 crc kubenswrapper[4624]: I0228 04:42:30.108048 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_60d209e2-524d-40ba-b092-14b4f73dfb71/kube-state-metrics/0.log" Feb 28 04:42:30 crc kubenswrapper[4624]: I0228 04:42:30.299037 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-p42q4_9608e724-9bc7-4040-bfd3-29f159075de8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:30 crc kubenswrapper[4624]: I0228 04:42:30.643514 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4965c79c-gh5mv_c6aa9707-50ce-40f7-a741-9dcfea4b1f8e/neutron-api/0.log" Feb 28 04:42:30 crc kubenswrapper[4624]: I0228 04:42:30.683887 4624 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b4965c79c-gh5mv_c6aa9707-50ce-40f7-a741-9dcfea4b1f8e/neutron-httpd/0.log" Feb 28 04:42:30 crc kubenswrapper[4624]: I0228 04:42:30.852622 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkj89_bef97704-39c1-4a26-b58f-90b76510822c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:31 crc kubenswrapper[4624]: I0228 04:42:31.282139 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_78266014-e4d1-459b-b48f-a8b21a17cce3/nova-api-log/0.log" Feb 28 04:42:31 crc kubenswrapper[4624]: I0228 04:42:31.449562 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6d7f478f-8240-4fb8-8cfe-5b2e16c55b21/nova-cell0-conductor-conductor/0.log" Feb 28 04:42:31 crc kubenswrapper[4624]: I0228 04:42:31.705631 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_78266014-e4d1-459b-b48f-a8b21a17cce3/nova-api-api/0.log" Feb 28 04:42:31 crc kubenswrapper[4624]: I0228 04:42:31.714319 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cff7ee8c-629b-43aa-a39b-1b2282c58d2b/nova-cell1-conductor-conductor/0.log" Feb 28 04:42:31 crc kubenswrapper[4624]: I0228 04:42:31.844822 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4b31404c-f19e-465d-9acb-3a314299ad57/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 04:42:32 crc kubenswrapper[4624]: I0228 04:42:32.233239 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-fr4lw_15a08883-796d-49b7-a003-66cb6cc51189/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:32 crc kubenswrapper[4624]: I0228 04:42:32.271095 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_84e801c6-735b-4858-81d4-2dac7c9eba75/nova-metadata-log/0.log" Feb 28 04:42:32 crc kubenswrapper[4624]: I0228 04:42:32.691988 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db8c8413-e456-4f82-9947-7d37578d237f/mysql-bootstrap/0.log" Feb 28 04:42:32 crc kubenswrapper[4624]: I0228 04:42:32.703560 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_82fbc2f3-5eb4-4b43-a7c7-8c93c9fdf2e6/nova-scheduler-scheduler/0.log" Feb 28 04:42:32 crc kubenswrapper[4624]: I0228 04:42:32.896222 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db8c8413-e456-4f82-9947-7d37578d237f/mysql-bootstrap/0.log" Feb 28 04:42:32 crc kubenswrapper[4624]: I0228 04:42:32.946157 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db8c8413-e456-4f82-9947-7d37578d237f/galera/0.log" Feb 28 04:42:33 crc kubenswrapper[4624]: I0228 04:42:33.209030 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c9c8d03c-80e2-42fc-a320-8175c10a59c4/mysql-bootstrap/0.log" Feb 28 04:42:33 crc kubenswrapper[4624]: I0228 04:42:33.378552 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c9c8d03c-80e2-42fc-a320-8175c10a59c4/galera/0.log" Feb 28 04:42:33 crc kubenswrapper[4624]: I0228 04:42:33.499170 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c9c8d03c-80e2-42fc-a320-8175c10a59c4/mysql-bootstrap/0.log" Feb 28 04:42:33 crc kubenswrapper[4624]: I0228 04:42:33.629520 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_fa3966ee-e42d-4dfe-a730-978481d7f497/openstackclient/0.log" Feb 28 04:42:33 crc kubenswrapper[4624]: I0228 04:42:33.748853 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-b9hfd_34bc3551-9974-4754-b285-e61f586a0b18/openstack-network-exporter/0.log" Feb 28 04:42:33 crc kubenswrapper[4624]: I0228 04:42:33.758934 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_84e801c6-735b-4858-81d4-2dac7c9eba75/nova-metadata-metadata/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.018371 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovsdb-server-init/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.206459 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovsdb-server-init/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.268317 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovsdb-server/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.339993 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f76ww_25ca2d4f-2528-442c-bfdb-7eab683203e4/ovs-vswitchd/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.470926 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-phft7_6da0269d-5fc3-487a-a49a-fa87c07af687/ovn-controller/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.660454 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ks9tq_e0224a59-2832-42cb-91f3-e0f12db48a81/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.755731 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5e11975f-5910-43a1-91ed-2633d3576fce/openstack-network-exporter/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 
04:42:34.837277 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5e11975f-5910-43a1-91ed-2633d3576fce/ovn-northd/0.log" Feb 28 04:42:34 crc kubenswrapper[4624]: I0228 04:42:34.982927 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1463f48e-4ada-4214-b4cf-520088ae4fe4/openstack-network-exporter/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.043031 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1463f48e-4ada-4214-b4cf-520088ae4fe4/ovsdbserver-nb/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.254915 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c/ovsdbserver-sb/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.293579 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae7b1870-f3ae-4f47-84a3-f4cec0f2a70c/openstack-network-exporter/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.480387 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86b4894974-wxqfg_0391f882-2f7a-47e9-b4f2-b640e146e079/placement-api/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.570654 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_00b8a2b6-f64c-452c-ac93-00422b339f64/setup-container/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.627008 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86b4894974-wxqfg_0391f882-2f7a-47e9-b4f2-b640e146e079/placement-log/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.884818 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_00b8a2b6-f64c-452c-ac93-00422b339f64/setup-container/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.942918 4624 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_00b8a2b6-f64c-452c-ac93-00422b339f64/rabbitmq/0.log" Feb 28 04:42:35 crc kubenswrapper[4624]: I0228 04:42:35.961231 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_03d202d9-cd01-4f0c-b7dc-9e89a7676c65/setup-container/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.239439 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_03d202d9-cd01-4f0c-b7dc-9e89a7676c65/rabbitmq/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.274435 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_03d202d9-cd01-4f0c-b7dc-9e89a7676c65/setup-container/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.343546 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jqvc6_0bcd1ce2-be32-4778-aced-701605c2cc28/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.509814 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tb8rg_985adc96-94ed-4823-a477-f222def355a1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.656895 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ltc7p_435d0cd8-63cc-4a9d-a82b-0aa9d3ff4c51/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.861222 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nqpfh_7c73aa0b-4045-4181-849d-8e7a631cdb87/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:36 crc kubenswrapper[4624]: I0228 04:42:36.972260 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7qc65_077546b4-fddd-40c3-866a-714afa3a4f2f/ssh-known-hosts-edpm-deployment/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.241001 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-775c6bbdc-lvbk6_7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41/proxy-httpd/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.258814 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-775c6bbdc-lvbk6_7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41/proxy-server/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.294706 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gfd7z_41ef9c88-060f-43ed-8fbf-cc7a7c6f0f33/swift-ring-rebalance/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.540530 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-auditor/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.588884 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-reaper/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.639245 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-replicator/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.817873 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/account-server/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.860888 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-auditor/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.889450 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-replicator/0.log" Feb 28 04:42:37 crc kubenswrapper[4624]: I0228 04:42:37.970996 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-server/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.108166 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/container-updater/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.143140 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-auditor/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.177911 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-expirer/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.331701 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-replicator/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.410499 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-updater/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.429278 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/rsync/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.429679 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/object-server/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.664490 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_08cf446c-fcb0-4f4a-af81-0f64d52669e8/swift-recon-cron/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.889953 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7s7vq_2588d2da-daa4-4eb7-b706-25290e0840c7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.974601 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8687164b-ff55-49e1-ae97-79d38c05f861/tempest-tests-tempest-tests-runner/0.log" Feb 28 04:42:38 crc kubenswrapper[4624]: I0228 04:42:38.986542 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b5bf464d-f307-4f8b-be8c-cfe363cc6daa/test-operator-logs-container/0.log" Feb 28 04:42:39 crc kubenswrapper[4624]: I0228 04:42:39.325008 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-92tp2_b801953f-c310-4623-ad3e-69dc84bc9a34/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 04:42:51 crc kubenswrapper[4624]: I0228 04:42:51.846341 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_81d248fe-a92f-469e-8283-3fd135198c65/memcached/0.log" Feb 28 04:43:04 crc kubenswrapper[4624]: I0228 04:43:04.200071 4624 scope.go:117] "RemoveContainer" containerID="b339232fb963690aa772c2594e3248170c0d1fe3a1dd0bc7b6381d937d831f76" Feb 28 04:43:09 crc kubenswrapper[4624]: I0228 04:43:09.947321 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/util/0.log" Feb 28 04:43:10 crc kubenswrapper[4624]: I0228 04:43:10.208458 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/pull/0.log" Feb 28 04:43:10 crc kubenswrapper[4624]: I0228 04:43:10.273619 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/util/0.log" Feb 28 04:43:10 crc kubenswrapper[4624]: I0228 04:43:10.329380 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/pull/0.log" Feb 28 04:43:10 crc kubenswrapper[4624]: I0228 04:43:10.549210 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/pull/0.log" Feb 28 04:43:10 crc kubenswrapper[4624]: I0228 04:43:10.552664 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/extract/0.log" Feb 28 04:43:10 crc kubenswrapper[4624]: I0228 04:43:10.600349 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bfbf5a283dde9023742228cc48f83e8180a7096b93f3cb66edc9257b9a979h2_7ddaf0c6-c923-45ba-ad47-fcfd5a96e347/util/0.log" Feb 28 04:43:11 crc kubenswrapper[4624]: I0228 04:43:11.142132 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-5rdgn_f62e258f-732a-4da1-8670-475725509310/manager/0.log" Feb 28 04:43:11 crc kubenswrapper[4624]: I0228 04:43:11.549275 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7f748f8b74-vdhdb_a5b6c7a0-a640-4faa-836c-7c5d0c29acd9/manager/0.log" Feb 28 04:43:11 crc 
kubenswrapper[4624]: I0228 04:43:11.718592 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-585b788787-b97cm_8507d808-bdf4-47f7-adb9-e3746c4768bf/manager/0.log" Feb 28 04:43:11 crc kubenswrapper[4624]: I0228 04:43:11.963325 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7db95d7ffb-k68gx_5013f14b-e7ba-400b-8a1e-d187991a0e49/manager/0.log" Feb 28 04:43:12 crc kubenswrapper[4624]: I0228 04:43:12.556260 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-8784b4656-8vq2d_2ef92f5a-9f82-40fb-81a2-c4a75aec60cf/manager/0.log" Feb 28 04:43:12 crc kubenswrapper[4624]: I0228 04:43:12.734598 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-c77466965-f8x9g_a96b8e7a-1320-4ede-9f43-ec80e2d562c9/manager/0.log" Feb 28 04:43:13 crc kubenswrapper[4624]: I0228 04:43:13.085642 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78b64779b9-rvz6s_84190c06-4523-4d3d-ab8c-cec0aca7c393/manager/0.log" Feb 28 04:43:13 crc kubenswrapper[4624]: I0228 04:43:13.224609 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-768c8b45bb-gkxmg_50df7aca-97ff-41dc-92cc-143cb02acea8/manager/0.log" Feb 28 04:43:13 crc kubenswrapper[4624]: I0228 04:43:13.362471 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76fd76856-knmpc_3e549f4d-18e0-49cf-a82e-efde664ab810/manager/0.log" Feb 28 04:43:13 crc kubenswrapper[4624]: I0228 04:43:13.532580 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-745fc45789-tvr7t_f3c08c1c-5646-48e9-9c9a-537b7619ecb0/manager/0.log" Feb 28 
04:43:13 crc kubenswrapper[4624]: I0228 04:43:13.965039 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-768f998cf4-dv9vf_6497616d-eb08-4bd4-b3a0-8ee000cdfe47/manager/0.log" Feb 28 04:43:14 crc kubenswrapper[4624]: I0228 04:43:14.135247 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c67ff7674-psffj_797119fd-2208-40d7-86c8-594e59529182/manager/0.log" Feb 28 04:43:14 crc kubenswrapper[4624]: I0228 04:43:14.279898 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-cc79fdffd-xw2s7_8cb779fb-ff77-468c-9198-065b3e4bf393/manager/0.log" Feb 28 04:43:14 crc kubenswrapper[4624]: I0228 04:43:14.498312 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-c7v67_9776c87a-53fb-404c-8bbe-0fbeb07eda0d/manager/0.log" Feb 28 04:43:14 crc kubenswrapper[4624]: I0228 04:43:14.946722 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-596b9db54c-pdc78_58d2ada8-fb04-4054-bba9-e2742bddbce5/operator/0.log" Feb 28 04:43:14 crc kubenswrapper[4624]: I0228 04:43:14.972492 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6r9np_5db2fa57-c7df-4bb4-ba80-5aa1a8ee08be/registry-server/0.log" Feb 28 04:43:15 crc kubenswrapper[4624]: I0228 04:43:15.375706 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-684c7d77b-c6gww_c6a151b1-0add-4b07-aa32-9a9e0dc2f526/manager/0.log" Feb 28 04:43:15 crc kubenswrapper[4624]: I0228 04:43:15.476109 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-bff955cc4-x8vll_582d0963-7f3a-4664-85e4-9148c495eb1a/manager/0.log" 
Feb 28 04:43:15 crc kubenswrapper[4624]: I0228 04:43:15.611964 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7d47r_154dfd82-a449-4812-bdd5-3e9c8a474b3d/operator/0.log" Feb 28 04:43:15 crc kubenswrapper[4624]: I0228 04:43:15.764347 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-55f4bf89cb-54l7x_b26b01f4-0d96-4a5b-bb71-58d691b92119/manager/0.log" Feb 28 04:43:16 crc kubenswrapper[4624]: I0228 04:43:16.056532 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-tgr2z_d6722506-d5dd-4fb4-b81a-d27c5dab59dd/manager/0.log" Feb 28 04:43:16 crc kubenswrapper[4624]: I0228 04:43:16.252986 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-lfkwt_39ee7326-c4c7-4dee-a749-35da4ff62746/manager/0.log" Feb 28 04:43:16 crc kubenswrapper[4624]: I0228 04:43:16.309756 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-65c9f4f6b-7w84p_195b486b-92db-481a-9478-7a3edfeb79ae/manager/0.log" Feb 28 04:43:16 crc kubenswrapper[4624]: I0228 04:43:16.558822 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c5dbcf94c-psgpc_61521bf4-1381-4fe8-a9d3-0948ebaa1ca6/manager/0.log" Feb 28 04:43:19 crc kubenswrapper[4624]: I0228 04:43:19.174741 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-999d845f-jrsj4_4833820c-a44e-4eb4-8716-bab85def7811/manager/0.log" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.559578 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m8sxl"] Feb 28 04:43:30 crc kubenswrapper[4624]: E0228 04:43:30.560620 4624 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd2fdae-47d8-468e-99e0-40a3f203b16b" containerName="oc" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.560638 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd2fdae-47d8-468e-99e0-40a3f203b16b" containerName="oc" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.560836 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd2fdae-47d8-468e-99e0-40a3f203b16b" containerName="oc" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.562189 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.575517 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m8sxl"] Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.625865 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-utilities\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.625902 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrgtc\" (UniqueName: \"kubernetes.io/projected/4aa5247c-a05d-456e-9fa8-fb082682e9ae-kube-api-access-zrgtc\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.626167 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-catalog-content\") pod 
\"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.733290 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-catalog-content\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.734326 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-utilities\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.734360 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrgtc\" (UniqueName: \"kubernetes.io/projected/4aa5247c-a05d-456e-9fa8-fb082682e9ae-kube-api-access-zrgtc\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.735072 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-catalog-content\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.740420 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-utilities\") pod \"certified-operators-m8sxl\" (UID: 
\"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.762989 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrgtc\" (UniqueName: \"kubernetes.io/projected/4aa5247c-a05d-456e-9fa8-fb082682e9ae-kube-api-access-zrgtc\") pod \"certified-operators-m8sxl\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:30 crc kubenswrapper[4624]: I0228 04:43:30.882032 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:31 crc kubenswrapper[4624]: I0228 04:43:31.501934 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m8sxl"] Feb 28 04:43:31 crc kubenswrapper[4624]: I0228 04:43:31.892537 4624 generic.go:334] "Generic (PLEG): container finished" podID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerID="23adc4493371fcb5b191cad664e8d133931655d65c72ddd1b7b1db3eb33886f1" exitCode=0 Feb 28 04:43:31 crc kubenswrapper[4624]: I0228 04:43:31.892753 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerDied","Data":"23adc4493371fcb5b191cad664e8d133931655d65c72ddd1b7b1db3eb33886f1"} Feb 28 04:43:31 crc kubenswrapper[4624]: I0228 04:43:31.892780 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerStarted","Data":"5699f2e0572dd599b85aa2272e31a6a1b92bf13e6b30292561dccc093ddcd7b9"} Feb 28 04:43:32 crc kubenswrapper[4624]: I0228 04:43:32.904480 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" 
event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerStarted","Data":"8ecca4f7632b0bbd1fbd0aec8b0b91791ce2f9c1385d8e35ea04c012db30f71a"} Feb 28 04:43:35 crc kubenswrapper[4624]: I0228 04:43:35.939046 4624 generic.go:334] "Generic (PLEG): container finished" podID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerID="8ecca4f7632b0bbd1fbd0aec8b0b91791ce2f9c1385d8e35ea04c012db30f71a" exitCode=0 Feb 28 04:43:35 crc kubenswrapper[4624]: I0228 04:43:35.939232 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerDied","Data":"8ecca4f7632b0bbd1fbd0aec8b0b91791ce2f9c1385d8e35ea04c012db30f71a"} Feb 28 04:43:36 crc kubenswrapper[4624]: I0228 04:43:36.951446 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerStarted","Data":"9c1b17f7115104fc2392d3e12e956c97e3950731a9a7d7e7f3bbced9ef6e8751"} Feb 28 04:43:36 crc kubenswrapper[4624]: I0228 04:43:36.973259 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m8sxl" podStartSLOduration=2.411375113 podStartE2EDuration="6.97323889s" podCreationTimestamp="2026-02-28 04:43:30 +0000 UTC" firstStartedPulling="2026-02-28 04:43:31.893943004 +0000 UTC m=+4066.557982313" lastFinishedPulling="2026-02-28 04:43:36.455806771 +0000 UTC m=+4071.119846090" observedRunningTime="2026-02-28 04:43:36.970181318 +0000 UTC m=+4071.634220627" watchObservedRunningTime="2026-02-28 04:43:36.97323889 +0000 UTC m=+4071.637278199" Feb 28 04:43:40 crc kubenswrapper[4624]: I0228 04:43:40.883096 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:40 crc kubenswrapper[4624]: I0228 04:43:40.883663 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:41 crc kubenswrapper[4624]: I0228 04:43:41.947682 4624 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m8sxl" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="registry-server" probeResult="failure" output=< Feb 28 04:43:41 crc kubenswrapper[4624]: timeout: failed to connect service ":50051" within 1s Feb 28 04:43:41 crc kubenswrapper[4624]: > Feb 28 04:43:42 crc kubenswrapper[4624]: I0228 04:43:42.099925 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-m4qb9_d3ac81ca-3efe-4112-a8d0-9503bd1826b7/control-plane-machine-set-operator/0.log" Feb 28 04:43:42 crc kubenswrapper[4624]: I0228 04:43:42.234618 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nhzzm_b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d/kube-rbac-proxy/0.log" Feb 28 04:43:42 crc kubenswrapper[4624]: I0228 04:43:42.323699 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nhzzm_b8bfea4a-e555-41fd-8e7c-1f5d1ae0bf4d/machine-api-operator/0.log" Feb 28 04:43:50 crc kubenswrapper[4624]: I0228 04:43:50.955779 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:51 crc kubenswrapper[4624]: I0228 04:43:51.016538 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:51 crc kubenswrapper[4624]: I0228 04:43:51.199745 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m8sxl"] Feb 28 04:43:52 crc kubenswrapper[4624]: I0228 04:43:52.095064 4624 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-m8sxl" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="registry-server" containerID="cri-o://9c1b17f7115104fc2392d3e12e956c97e3950731a9a7d7e7f3bbced9ef6e8751" gracePeriod=2 Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.108571 4624 generic.go:334] "Generic (PLEG): container finished" podID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerID="9c1b17f7115104fc2392d3e12e956c97e3950731a9a7d7e7f3bbced9ef6e8751" exitCode=0 Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.108657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerDied","Data":"9c1b17f7115104fc2392d3e12e956c97e3950731a9a7d7e7f3bbced9ef6e8751"} Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.108962 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sxl" event={"ID":"4aa5247c-a05d-456e-9fa8-fb082682e9ae","Type":"ContainerDied","Data":"5699f2e0572dd599b85aa2272e31a6a1b92bf13e6b30292561dccc093ddcd7b9"} Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.108985 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5699f2e0572dd599b85aa2272e31a6a1b92bf13e6b30292561dccc093ddcd7b9" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.300325 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.356430 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrgtc\" (UniqueName: \"kubernetes.io/projected/4aa5247c-a05d-456e-9fa8-fb082682e9ae-kube-api-access-zrgtc\") pod \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.356600 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-utilities\") pod \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.356797 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-catalog-content\") pod \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\" (UID: \"4aa5247c-a05d-456e-9fa8-fb082682e9ae\") " Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.357745 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-utilities" (OuterVolumeSpecName: "utilities") pod "4aa5247c-a05d-456e-9fa8-fb082682e9ae" (UID: "4aa5247c-a05d-456e-9fa8-fb082682e9ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.365377 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa5247c-a05d-456e-9fa8-fb082682e9ae-kube-api-access-zrgtc" (OuterVolumeSpecName: "kube-api-access-zrgtc") pod "4aa5247c-a05d-456e-9fa8-fb082682e9ae" (UID: "4aa5247c-a05d-456e-9fa8-fb082682e9ae"). InnerVolumeSpecName "kube-api-access-zrgtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.409242 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aa5247c-a05d-456e-9fa8-fb082682e9ae" (UID: "4aa5247c-a05d-456e-9fa8-fb082682e9ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.458969 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.459002 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa5247c-a05d-456e-9fa8-fb082682e9ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:43:53 crc kubenswrapper[4624]: I0228 04:43:53.459017 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrgtc\" (UniqueName: \"kubernetes.io/projected/4aa5247c-a05d-456e-9fa8-fb082682e9ae-kube-api-access-zrgtc\") on node \"crc\" DevicePath \"\"" Feb 28 04:43:54 crc kubenswrapper[4624]: I0228 04:43:54.120894 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m8sxl" Feb 28 04:43:54 crc kubenswrapper[4624]: I0228 04:43:54.157804 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m8sxl"] Feb 28 04:43:54 crc kubenswrapper[4624]: I0228 04:43:54.164874 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m8sxl"] Feb 28 04:43:56 crc kubenswrapper[4624]: I0228 04:43:56.108989 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" path="/var/lib/kubelet/pods/4aa5247c-a05d-456e-9fa8-fb082682e9ae/volumes" Feb 28 04:43:56 crc kubenswrapper[4624]: I0228 04:43:56.462332 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pbztg_c3c9f58c-1f61-4731-b062-8bc0f3044e68/cert-manager-controller/0.log" Feb 28 04:43:56 crc kubenswrapper[4624]: I0228 04:43:56.653665 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q7sw6_4ca9316a-d88d-402c-a943-f858bc793848/cert-manager-webhook/0.log" Feb 28 04:43:56 crc kubenswrapper[4624]: I0228 04:43:56.655035 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-t4xf2_18477b71-69e7-4103-949d-4c377e3f9246/cert-manager-cainjector/0.log" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.146103 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537564-9ljtx"] Feb 28 04:44:00 crc kubenswrapper[4624]: E0228 04:44:00.146979 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="registry-server" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.146996 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="registry-server" Feb 28 04:44:00 crc 
kubenswrapper[4624]: E0228 04:44:00.147015 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="extract-content" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.147020 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="extract-content" Feb 28 04:44:00 crc kubenswrapper[4624]: E0228 04:44:00.147049 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="extract-utilities" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.147056 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="extract-utilities" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.147260 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa5247c-a05d-456e-9fa8-fb082682e9ae" containerName="registry-server" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.147903 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.151022 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.151643 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.155830 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-9ljtx"] Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.156674 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.184626 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxd68\" (UniqueName: \"kubernetes.io/projected/16f4cff9-fdec-403a-a713-835337213c80-kube-api-access-gxd68\") pod \"auto-csr-approver-29537564-9ljtx\" (UID: \"16f4cff9-fdec-403a-a713-835337213c80\") " pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.286657 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxd68\" (UniqueName: \"kubernetes.io/projected/16f4cff9-fdec-403a-a713-835337213c80-kube-api-access-gxd68\") pod \"auto-csr-approver-29537564-9ljtx\" (UID: \"16f4cff9-fdec-403a-a713-835337213c80\") " pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.311708 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxd68\" (UniqueName: \"kubernetes.io/projected/16f4cff9-fdec-403a-a713-835337213c80-kube-api-access-gxd68\") pod \"auto-csr-approver-29537564-9ljtx\" (UID: \"16f4cff9-fdec-403a-a713-835337213c80\") " 
pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.465079 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:00 crc kubenswrapper[4624]: I0228 04:44:00.941366 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-9ljtx"] Feb 28 04:44:00 crc kubenswrapper[4624]: W0228 04:44:00.946810 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f4cff9_fdec_403a_a713_835337213c80.slice/crio-c92a41d3b23b27d31dad7fd35ae9e3229cbe52ba23056e181cb41a1fa9663041 WatchSource:0}: Error finding container c92a41d3b23b27d31dad7fd35ae9e3229cbe52ba23056e181cb41a1fa9663041: Status 404 returned error can't find the container with id c92a41d3b23b27d31dad7fd35ae9e3229cbe52ba23056e181cb41a1fa9663041 Feb 28 04:44:01 crc kubenswrapper[4624]: I0228 04:44:01.179774 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" event={"ID":"16f4cff9-fdec-403a-a713-835337213c80","Type":"ContainerStarted","Data":"c92a41d3b23b27d31dad7fd35ae9e3229cbe52ba23056e181cb41a1fa9663041"} Feb 28 04:44:03 crc kubenswrapper[4624]: I0228 04:44:03.211027 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" event={"ID":"16f4cff9-fdec-403a-a713-835337213c80","Type":"ContainerStarted","Data":"8040674f1ccf3028190d85e9c0ff1573f5594697fd07556cdbaf7423137c8bff"} Feb 28 04:44:03 crc kubenswrapper[4624]: I0228 04:44:03.244065 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" podStartSLOduration=2.154450643 podStartE2EDuration="3.244039853s" podCreationTimestamp="2026-02-28 04:44:00 +0000 UTC" firstStartedPulling="2026-02-28 04:44:00.949286842 +0000 UTC 
m=+4095.613326151" lastFinishedPulling="2026-02-28 04:44:02.038876042 +0000 UTC m=+4096.702915361" observedRunningTime="2026-02-28 04:44:03.234156756 +0000 UTC m=+4097.898196075" watchObservedRunningTime="2026-02-28 04:44:03.244039853 +0000 UTC m=+4097.908079162" Feb 28 04:44:04 crc kubenswrapper[4624]: I0228 04:44:04.221434 4624 generic.go:334] "Generic (PLEG): container finished" podID="16f4cff9-fdec-403a-a713-835337213c80" containerID="8040674f1ccf3028190d85e9c0ff1573f5594697fd07556cdbaf7423137c8bff" exitCode=0 Feb 28 04:44:04 crc kubenswrapper[4624]: I0228 04:44:04.221485 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" event={"ID":"16f4cff9-fdec-403a-a713-835337213c80","Type":"ContainerDied","Data":"8040674f1ccf3028190d85e9c0ff1573f5594697fd07556cdbaf7423137c8bff"} Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.563730 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.589699 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxd68\" (UniqueName: \"kubernetes.io/projected/16f4cff9-fdec-403a-a713-835337213c80-kube-api-access-gxd68\") pod \"16f4cff9-fdec-403a-a713-835337213c80\" (UID: \"16f4cff9-fdec-403a-a713-835337213c80\") " Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.600888 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f4cff9-fdec-403a-a713-835337213c80-kube-api-access-gxd68" (OuterVolumeSpecName: "kube-api-access-gxd68") pod "16f4cff9-fdec-403a-a713-835337213c80" (UID: "16f4cff9-fdec-403a-a713-835337213c80"). InnerVolumeSpecName "kube-api-access-gxd68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.637884 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pj8hb"] Feb 28 04:44:05 crc kubenswrapper[4624]: E0228 04:44:05.638333 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f4cff9-fdec-403a-a713-835337213c80" containerName="oc" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.638357 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f4cff9-fdec-403a-a713-835337213c80" containerName="oc" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.638573 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f4cff9-fdec-403a-a713-835337213c80" containerName="oc" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.640012 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.672435 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pj8hb"] Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.691487 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjwl\" (UniqueName: \"kubernetes.io/projected/ca354bd8-8d08-42b9-af85-20706f0792bd-kube-api-access-qwjwl\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.691578 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-utilities\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc 
kubenswrapper[4624]: I0228 04:44:05.691604 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-catalog-content\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.691669 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxd68\" (UniqueName: \"kubernetes.io/projected/16f4cff9-fdec-403a-a713-835337213c80-kube-api-access-gxd68\") on node \"crc\" DevicePath \"\"" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.793588 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-utilities\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.793643 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-catalog-content\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.793754 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwjwl\" (UniqueName: \"kubernetes.io/projected/ca354bd8-8d08-42b9-af85-20706f0792bd-kube-api-access-qwjwl\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.794161 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-utilities\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.794344 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-catalog-content\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.811793 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwjwl\" (UniqueName: \"kubernetes.io/projected/ca354bd8-8d08-42b9-af85-20706f0792bd-kube-api-access-qwjwl\") pod \"redhat-operators-pj8hb\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:05 crc kubenswrapper[4624]: I0228 04:44:05.988743 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:06 crc kubenswrapper[4624]: I0228 04:44:06.241605 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" event={"ID":"16f4cff9-fdec-403a-a713-835337213c80","Type":"ContainerDied","Data":"c92a41d3b23b27d31dad7fd35ae9e3229cbe52ba23056e181cb41a1fa9663041"} Feb 28 04:44:06 crc kubenswrapper[4624]: I0228 04:44:06.241826 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92a41d3b23b27d31dad7fd35ae9e3229cbe52ba23056e181cb41a1fa9663041" Feb 28 04:44:06 crc kubenswrapper[4624]: I0228 04:44:06.241666 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-9ljtx" Feb 28 04:44:06 crc kubenswrapper[4624]: I0228 04:44:06.635490 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-6x88z"] Feb 28 04:44:06 crc kubenswrapper[4624]: I0228 04:44:06.642224 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-6x88z"] Feb 28 04:44:08 crc kubenswrapper[4624]: I0228 04:44:08.104037 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86262e59-aa73-4e1c-bfcb-61a9df1886b3" path="/var/lib/kubelet/pods/86262e59-aa73-4e1c-bfcb-61a9df1886b3/volumes" Feb 28 04:44:09 crc kubenswrapper[4624]: I0228 04:44:09.455851 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pj8hb"] Feb 28 04:44:10 crc kubenswrapper[4624]: I0228 04:44:10.277074 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerID="f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac" exitCode=0 Feb 28 04:44:10 crc kubenswrapper[4624]: I0228 04:44:10.277234 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerDied","Data":"f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac"} Feb 28 04:44:10 crc kubenswrapper[4624]: I0228 04:44:10.279274 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerStarted","Data":"daa1338290652ab0eaa507ea1c7f3a319e6e1ff6b54be5555af7297d3c8169e2"} Feb 28 04:44:10 crc kubenswrapper[4624]: I0228 04:44:10.977134 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-mnlv7_04995dc6-8837-4a1f-91df-bc058d0fb961/nmstate-console-plugin/0.log" Feb 
28 04:44:11 crc kubenswrapper[4624]: I0228 04:44:11.290222 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerStarted","Data":"73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14"} Feb 28 04:44:11 crc kubenswrapper[4624]: I0228 04:44:11.486072 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-75frj_3abaedfc-0055-4d3d-a10c-0adf10cf8f52/nmstate-handler/0.log" Feb 28 04:44:11 crc kubenswrapper[4624]: I0228 04:44:11.505695 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-pd9k9_66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a/kube-rbac-proxy/0.log" Feb 28 04:44:11 crc kubenswrapper[4624]: I0228 04:44:11.610612 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-pd9k9_66ef0815-1c21-4b36-8e9e-b18d0fcc8d4a/nmstate-metrics/0.log" Feb 28 04:44:11 crc kubenswrapper[4624]: I0228 04:44:11.748134 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-wtfks_54bc88fe-b7dc-43a1-b64b-60723eb0cf7c/nmstate-operator/0.log" Feb 28 04:44:11 crc kubenswrapper[4624]: I0228 04:44:11.891262 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-ctxdh_2f662548-c391-4399-adba-8fa556360cf8/nmstate-webhook/0.log" Feb 28 04:44:17 crc kubenswrapper[4624]: I0228 04:44:17.345367 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerID="73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14" exitCode=0 Feb 28 04:44:17 crc kubenswrapper[4624]: I0228 04:44:17.345498 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" 
event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerDied","Data":"73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14"} Feb 28 04:44:18 crc kubenswrapper[4624]: I0228 04:44:18.357818 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerStarted","Data":"6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833"} Feb 28 04:44:18 crc kubenswrapper[4624]: I0228 04:44:18.384632 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pj8hb" podStartSLOduration=5.885813177 podStartE2EDuration="13.384610035s" podCreationTimestamp="2026-02-28 04:44:05 +0000 UTC" firstStartedPulling="2026-02-28 04:44:10.278934299 +0000 UTC m=+4104.942973618" lastFinishedPulling="2026-02-28 04:44:17.777731157 +0000 UTC m=+4112.441770476" observedRunningTime="2026-02-28 04:44:18.375552799 +0000 UTC m=+4113.039592108" watchObservedRunningTime="2026-02-28 04:44:18.384610035 +0000 UTC m=+4113.048649354" Feb 28 04:44:19 crc kubenswrapper[4624]: I0228 04:44:19.539632 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:44:19 crc kubenswrapper[4624]: I0228 04:44:19.539698 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:44:25 crc kubenswrapper[4624]: I0228 04:44:25.989810 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:25 crc kubenswrapper[4624]: I0228 04:44:25.990481 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:26 crc kubenswrapper[4624]: I0228 04:44:26.043019 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:26 crc kubenswrapper[4624]: I0228 04:44:26.492951 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:26 crc kubenswrapper[4624]: I0228 04:44:26.546063 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pj8hb"] Feb 28 04:44:28 crc kubenswrapper[4624]: I0228 04:44:28.462467 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pj8hb" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="registry-server" containerID="cri-o://6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833" gracePeriod=2 Feb 28 04:44:28 crc kubenswrapper[4624]: I0228 04:44:28.930271 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.053026 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-catalog-content\") pod \"ca354bd8-8d08-42b9-af85-20706f0792bd\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.053167 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwjwl\" (UniqueName: \"kubernetes.io/projected/ca354bd8-8d08-42b9-af85-20706f0792bd-kube-api-access-qwjwl\") pod \"ca354bd8-8d08-42b9-af85-20706f0792bd\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.053247 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-utilities\") pod \"ca354bd8-8d08-42b9-af85-20706f0792bd\" (UID: \"ca354bd8-8d08-42b9-af85-20706f0792bd\") " Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.054429 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-utilities" (OuterVolumeSpecName: "utilities") pod "ca354bd8-8d08-42b9-af85-20706f0792bd" (UID: "ca354bd8-8d08-42b9-af85-20706f0792bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.059907 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca354bd8-8d08-42b9-af85-20706f0792bd-kube-api-access-qwjwl" (OuterVolumeSpecName: "kube-api-access-qwjwl") pod "ca354bd8-8d08-42b9-af85-20706f0792bd" (UID: "ca354bd8-8d08-42b9-af85-20706f0792bd"). InnerVolumeSpecName "kube-api-access-qwjwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.155902 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwjwl\" (UniqueName: \"kubernetes.io/projected/ca354bd8-8d08-42b9-af85-20706f0792bd-kube-api-access-qwjwl\") on node \"crc\" DevicePath \"\"" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.155945 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.190674 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca354bd8-8d08-42b9-af85-20706f0792bd" (UID: "ca354bd8-8d08-42b9-af85-20706f0792bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.259486 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca354bd8-8d08-42b9-af85-20706f0792bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.474505 4624 generic.go:334] "Generic (PLEG): container finished" podID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerID="6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833" exitCode=0 Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.474597 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pj8hb" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.474593 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerDied","Data":"6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833"} Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.476596 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pj8hb" event={"ID":"ca354bd8-8d08-42b9-af85-20706f0792bd","Type":"ContainerDied","Data":"daa1338290652ab0eaa507ea1c7f3a319e6e1ff6b54be5555af7297d3c8169e2"} Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.476626 4624 scope.go:117] "RemoveContainer" containerID="6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.504137 4624 scope.go:117] "RemoveContainer" containerID="73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.522801 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pj8hb"] Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.532157 4624 scope.go:117] "RemoveContainer" containerID="f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.536978 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pj8hb"] Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.569222 4624 scope.go:117] "RemoveContainer" containerID="6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833" Feb 28 04:44:29 crc kubenswrapper[4624]: E0228 04:44:29.569617 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833\": container with ID starting with 6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833 not found: ID does not exist" containerID="6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.569664 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833"} err="failed to get container status \"6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833\": rpc error: code = NotFound desc = could not find container \"6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833\": container with ID starting with 6f326a675cc12fd7d596c2d8e8130c0d547aac0aaa275e7cf931f22f9e6f4833 not found: ID does not exist" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.569693 4624 scope.go:117] "RemoveContainer" containerID="73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14" Feb 28 04:44:29 crc kubenswrapper[4624]: E0228 04:44:29.569924 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14\": container with ID starting with 73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14 not found: ID does not exist" containerID="73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.569955 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14"} err="failed to get container status \"73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14\": rpc error: code = NotFound desc = could not find container \"73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14\": container with ID 
starting with 73125140cc973c80ab08c60656e010b5a7136acc37e20a5f65d21b3ea1b6ef14 not found: ID does not exist" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.569968 4624 scope.go:117] "RemoveContainer" containerID="f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac" Feb 28 04:44:29 crc kubenswrapper[4624]: E0228 04:44:29.570255 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac\": container with ID starting with f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac not found: ID does not exist" containerID="f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac" Feb 28 04:44:29 crc kubenswrapper[4624]: I0228 04:44:29.570297 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac"} err="failed to get container status \"f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac\": rpc error: code = NotFound desc = could not find container \"f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac\": container with ID starting with f3fb9b378cb95be370e04d0ea8e44cb9689c72ffed9ca1990f99dabef0e7ecac not found: ID does not exist" Feb 28 04:44:30 crc kubenswrapper[4624]: I0228 04:44:30.099473 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" path="/var/lib/kubelet/pods/ca354bd8-8d08-42b9-af85-20706f0792bd/volumes" Feb 28 04:44:43 crc kubenswrapper[4624]: I0228 04:44:43.476316 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-dj7kv_0633733e-c39d-4767-883b-e1b16be08190/kube-rbac-proxy/0.log" Feb 28 04:44:43 crc kubenswrapper[4624]: I0228 04:44:43.503989 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-86ddb6bd46-dj7kv_0633733e-c39d-4767-883b-e1b16be08190/controller/0.log" Feb 28 04:44:44 crc kubenswrapper[4624]: I0228 04:44:44.188102 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:44:44 crc kubenswrapper[4624]: I0228 04:44:44.264996 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:44:44 crc kubenswrapper[4624]: I0228 04:44:44.312761 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:44:44 crc kubenswrapper[4624]: I0228 04:44:44.315183 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:44:44 crc kubenswrapper[4624]: I0228 04:44:44.447323 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.035160 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.049127 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.104911 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.151532 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.409810 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/controller/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.434045 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-metrics/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.441745 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-frr-files/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.453633 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/cp-reloader/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.641577 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/frr-metrics/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.658980 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/kube-rbac-proxy/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.699786 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/kube-rbac-proxy-frr/0.log" Feb 28 04:44:45 crc kubenswrapper[4624]: I0228 04:44:45.919688 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/reloader/0.log" Feb 28 04:44:46 crc kubenswrapper[4624]: I0228 04:44:46.031584 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-8dm5w_a14d415d-3a62-412a-8c98-13543a8bb573/frr-k8s-webhook-server/0.log" Feb 28 04:44:46 crc kubenswrapper[4624]: I0228 04:44:46.477613 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6647b8f4b6-mkvwl_1ad85c59-61bb-4658-8e2a-cdd409e54b3d/manager/0.log" Feb 28 04:44:46 crc kubenswrapper[4624]: I0228 04:44:46.997923 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-694dbf9577-jnbcr_7be65953-83ce-403e-aac6-443ced5b772b/webhook-server/0.log" Feb 28 04:44:47 crc kubenswrapper[4624]: I0228 04:44:47.092578 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8qstw_ebe5dd19-2d46-4a69-9847-bc91d0cd4423/kube-rbac-proxy/0.log" Feb 28 04:44:47 crc kubenswrapper[4624]: I0228 04:44:47.514043 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpl69_9c99c5b7-87a8-483e-8c66-2f5918d657c0/frr/0.log" Feb 28 04:44:47 crc kubenswrapper[4624]: I0228 04:44:47.649557 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8qstw_ebe5dd19-2d46-4a69-9847-bc91d0cd4423/speaker/0.log" Feb 28 04:44:49 crc kubenswrapper[4624]: I0228 04:44:49.539985 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:44:49 crc kubenswrapper[4624]: I0228 04:44:49.540379 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.151805 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj"] Feb 28 04:45:00 crc kubenswrapper[4624]: E0228 04:45:00.153314 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="registry-server" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.153332 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="registry-server" Feb 28 04:45:00 crc kubenswrapper[4624]: E0228 04:45:00.153348 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="extract-utilities" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.153355 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="extract-utilities" Feb 28 04:45:00 crc kubenswrapper[4624]: E0228 04:45:00.153401 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="extract-content" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.153407 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="extract-content" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.153649 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca354bd8-8d08-42b9-af85-20706f0792bd" containerName="registry-server" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.154568 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.160112 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.160112 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.166794 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj"] Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.270373 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g29n2\" (UniqueName: \"kubernetes.io/projected/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-kube-api-access-g29n2\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.270437 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-config-volume\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.270996 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-secret-volume\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.373507 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-secret-volume\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.373641 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g29n2\" (UniqueName: \"kubernetes.io/projected/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-kube-api-access-g29n2\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.373683 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-config-volume\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.374840 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-config-volume\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.390934 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-secret-volume\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.393664 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g29n2\" (UniqueName: \"kubernetes.io/projected/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-kube-api-access-g29n2\") pod \"collect-profiles-29537565-pbvwj\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.479102 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:00 crc kubenswrapper[4624]: I0228 04:45:00.962367 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj"] Feb 28 04:45:01 crc kubenswrapper[4624]: I0228 04:45:01.783467 4624 generic.go:334] "Generic (PLEG): container finished" podID="aaba9d66-b26c-4176-9d4b-d3e68667e2b7" containerID="7369b86b6eb679bf98d13ea54681016810c8f675bb66943b5003fc96a36388a8" exitCode=0 Feb 28 04:45:01 crc kubenswrapper[4624]: I0228 04:45:01.783646 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" event={"ID":"aaba9d66-b26c-4176-9d4b-d3e68667e2b7","Type":"ContainerDied","Data":"7369b86b6eb679bf98d13ea54681016810c8f675bb66943b5003fc96a36388a8"} Feb 28 04:45:01 crc kubenswrapper[4624]: I0228 04:45:01.783788 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" 
event={"ID":"aaba9d66-b26c-4176-9d4b-d3e68667e2b7","Type":"ContainerStarted","Data":"5e9a788ad00f2659ca4e04cfd57688f20ae09ff1a104535edbde0eede0386ba9"} Feb 28 04:45:02 crc kubenswrapper[4624]: I0228 04:45:02.910317 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/util/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.320406 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.346468 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/pull/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.450466 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g29n2\" (UniqueName: \"kubernetes.io/projected/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-kube-api-access-g29n2\") pod \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.450528 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-config-volume\") pod \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.450581 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-secret-volume\") pod \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\" (UID: \"aaba9d66-b26c-4176-9d4b-d3e68667e2b7\") " Feb 28 04:45:03 crc 
kubenswrapper[4624]: I0228 04:45:03.451465 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "aaba9d66-b26c-4176-9d4b-d3e68667e2b7" (UID: "aaba9d66-b26c-4176-9d4b-d3e68667e2b7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.462386 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aaba9d66-b26c-4176-9d4b-d3e68667e2b7" (UID: "aaba9d66-b26c-4176-9d4b-d3e68667e2b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.462656 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-kube-api-access-g29n2" (OuterVolumeSpecName: "kube-api-access-g29n2") pod "aaba9d66-b26c-4176-9d4b-d3e68667e2b7" (UID: "aaba9d66-b26c-4176-9d4b-d3e68667e2b7"). InnerVolumeSpecName "kube-api-access-g29n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.470383 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/pull/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.498756 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/util/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.553574 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g29n2\" (UniqueName: \"kubernetes.io/projected/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-kube-api-access-g29n2\") on node \"crc\" DevicePath \"\"" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.553610 4624 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.553620 4624 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaba9d66-b26c-4176-9d4b-d3e68667e2b7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.705044 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/util/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.749652 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/extract/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.772775 4624 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a8272hlb_38b1807a-dd56-4dbe-9794-5f2de7b1b33f/pull/0.log" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.805064 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" event={"ID":"aaba9d66-b26c-4176-9d4b-d3e68667e2b7","Type":"ContainerDied","Data":"5e9a788ad00f2659ca4e04cfd57688f20ae09ff1a104535edbde0eede0386ba9"} Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.805131 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e9a788ad00f2659ca4e04cfd57688f20ae09ff1a104535edbde0eede0386ba9" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.805575 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-pbvwj" Feb 28 04:45:03 crc kubenswrapper[4624]: I0228 04:45:03.969359 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-utilities/0.log" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.156163 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-content/0.log" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.188263 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-utilities/0.log" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.203879 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-content/0.log" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.304977 4624 scope.go:117] 
"RemoveContainer" containerID="5f66edaddde02cfa72d469fc64ecb27da439eaf4a4e3174b26ed22f5371a169b" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.431643 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx"] Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.442919 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537520-6mkpx"] Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.448107 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-utilities/0.log" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.497646 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/extract-content/0.log" Feb 28 04:45:04 crc kubenswrapper[4624]: I0228 04:45:04.787284 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-utilities/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.136806 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-utilities/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.211222 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d7796_a8005398-5d8f-4adc-ae71-c01babe23241/registry-server/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.228757 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-content/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.229238 4624 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-content/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.418930 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-content/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.456618 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/extract-utilities/0.log" Feb 28 04:45:05 crc kubenswrapper[4624]: I0228 04:45:05.691950 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/util/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.014441 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/pull/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.024051 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/pull/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.058050 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/util/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.113350 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0bb789-cb32-47a1-9a3d-38658ad2cb80" path="/var/lib/kubelet/pods/1d0bb789-cb32-47a1-9a3d-38658ad2cb80/volumes" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.233923 4624 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qq4nj_d8226194-dd4d-461d-854a-131191db31f4/registry-server/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.324820 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/pull/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.345965 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/util/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.349567 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4k497n_6c955210-207a-4dc2-9be3-52ea5702de08/extract/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.571166 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-m6bvr_08a27942-dc8c-4905-b3d3-7202aae79787/marketplace-operator/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.691403 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-utilities/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.936353 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-utilities/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.981386 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-content/0.log" Feb 28 04:45:06 crc kubenswrapper[4624]: I0228 04:45:06.988413 4624 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-content/0.log" Feb 28 04:45:07 crc kubenswrapper[4624]: I0228 04:45:07.669641 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-utilities/0.log" Feb 28 04:45:07 crc kubenswrapper[4624]: I0228 04:45:07.804448 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/extract-content/0.log" Feb 28 04:45:07 crc kubenswrapper[4624]: I0228 04:45:07.890423 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xnmfd_e222997d-b739-4944-90b6-ad421288f50a/registry-server/0.log" Feb 28 04:45:07 crc kubenswrapper[4624]: I0228 04:45:07.931656 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-utilities/0.log" Feb 28 04:45:08 crc kubenswrapper[4624]: I0228 04:45:08.050265 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-content/0.log" Feb 28 04:45:08 crc kubenswrapper[4624]: I0228 04:45:08.056790 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-utilities/0.log" Feb 28 04:45:08 crc kubenswrapper[4624]: I0228 04:45:08.116010 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-content/0.log" Feb 28 04:45:08 crc kubenswrapper[4624]: I0228 04:45:08.305451 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-content/0.log" Feb 
28 04:45:08 crc kubenswrapper[4624]: I0228 04:45:08.341721 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/extract-utilities/0.log" Feb 28 04:45:09 crc kubenswrapper[4624]: I0228 04:45:09.054832 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mm8x2_e9ef4d32-8412-48d6-b08f-7230cd574d66/registry-server/0.log" Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.540111 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.540710 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.540780 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.541692 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdfb1cf3f9040f3df4e179148406e99de62a8bafd6e00cd6ed8e57a52338ea14"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.541764 4624 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://fdfb1cf3f9040f3df4e179148406e99de62a8bafd6e00cd6ed8e57a52338ea14" gracePeriod=600 Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.979611 4624 generic.go:334] "Generic (PLEG): container finished" podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="fdfb1cf3f9040f3df4e179148406e99de62a8bafd6e00cd6ed8e57a52338ea14" exitCode=0 Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.979652 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"fdfb1cf3f9040f3df4e179148406e99de62a8bafd6e00cd6ed8e57a52338ea14"} Feb 28 04:45:19 crc kubenswrapper[4624]: I0228 04:45:19.979684 4624 scope.go:117] "RemoveContainer" containerID="4fdaccc32c0f5072d73a7f24f866da0c7191a7bf9bca95788c76c0d726d9048a" Feb 28 04:45:21 crc kubenswrapper[4624]: I0228 04:45:21.003177 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerStarted","Data":"d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"} Feb 28 04:45:57 crc kubenswrapper[4624]: I0228 04:45:57.755047 4624 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-775c6bbdc-lvbk6" podUID="7ad1dc2e-c4ec-4d5a-af14-cf0fc6db7b41" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.144254 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537566-sqrzf"] Feb 28 04:46:00 crc kubenswrapper[4624]: E0228 04:46:00.145126 4624 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aaba9d66-b26c-4176-9d4b-d3e68667e2b7" containerName="collect-profiles" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.145141 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaba9d66-b26c-4176-9d4b-d3e68667e2b7" containerName="collect-profiles" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.145374 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaba9d66-b26c-4176-9d4b-d3e68667e2b7" containerName="collect-profiles" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.146109 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.148460 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.148609 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.149433 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.156583 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-sqrzf"] Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.273884 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7v6d\" (UniqueName: \"kubernetes.io/projected/ffb5e204-101e-4af8-8b9c-263fc0c81b87-kube-api-access-b7v6d\") pod \"auto-csr-approver-29537566-sqrzf\" (UID: \"ffb5e204-101e-4af8-8b9c-263fc0c81b87\") " pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.375904 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7v6d\" (UniqueName: 
\"kubernetes.io/projected/ffb5e204-101e-4af8-8b9c-263fc0c81b87-kube-api-access-b7v6d\") pod \"auto-csr-approver-29537566-sqrzf\" (UID: \"ffb5e204-101e-4af8-8b9c-263fc0c81b87\") " pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.397455 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7v6d\" (UniqueName: \"kubernetes.io/projected/ffb5e204-101e-4af8-8b9c-263fc0c81b87-kube-api-access-b7v6d\") pod \"auto-csr-approver-29537566-sqrzf\" (UID: \"ffb5e204-101e-4af8-8b9c-263fc0c81b87\") " pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.499760 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.992818 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-sqrzf"] Feb 28 04:46:00 crc kubenswrapper[4624]: I0228 04:46:00.994155 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:46:01 crc kubenswrapper[4624]: I0228 04:46:01.368966 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" event={"ID":"ffb5e204-101e-4af8-8b9c-263fc0c81b87","Type":"ContainerStarted","Data":"2ef23c72df5828d4cbe1680bdbe143f419ba15c742a6f2ed3d9afe8a3d1d8a00"} Feb 28 04:46:03 crc kubenswrapper[4624]: I0228 04:46:03.392495 4624 generic.go:334] "Generic (PLEG): container finished" podID="ffb5e204-101e-4af8-8b9c-263fc0c81b87" containerID="20283cb9e0943c8b4e826802825a2c36f0b6699ac2efeef32cb5c192756113b4" exitCode=0 Feb 28 04:46:03 crc kubenswrapper[4624]: I0228 04:46:03.392723 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" 
event={"ID":"ffb5e204-101e-4af8-8b9c-263fc0c81b87","Type":"ContainerDied","Data":"20283cb9e0943c8b4e826802825a2c36f0b6699ac2efeef32cb5c192756113b4"} Feb 28 04:46:04 crc kubenswrapper[4624]: I0228 04:46:04.444429 4624 scope.go:117] "RemoveContainer" containerID="ad25b920c8bfb1525d2215bfa2afb7e41b7cf89958ea8514fa13403aa82364dc" Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.294342 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.374178 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7v6d\" (UniqueName: \"kubernetes.io/projected/ffb5e204-101e-4af8-8b9c-263fc0c81b87-kube-api-access-b7v6d\") pod \"ffb5e204-101e-4af8-8b9c-263fc0c81b87\" (UID: \"ffb5e204-101e-4af8-8b9c-263fc0c81b87\") " Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.391536 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb5e204-101e-4af8-8b9c-263fc0c81b87-kube-api-access-b7v6d" (OuterVolumeSpecName: "kube-api-access-b7v6d") pod "ffb5e204-101e-4af8-8b9c-263fc0c81b87" (UID: "ffb5e204-101e-4af8-8b9c-263fc0c81b87"). InnerVolumeSpecName "kube-api-access-b7v6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.413013 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" event={"ID":"ffb5e204-101e-4af8-8b9c-263fc0c81b87","Type":"ContainerDied","Data":"2ef23c72df5828d4cbe1680bdbe143f419ba15c742a6f2ed3d9afe8a3d1d8a00"} Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.413051 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef23c72df5828d4cbe1680bdbe143f419ba15c742a6f2ed3d9afe8a3d1d8a00" Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.413060 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-sqrzf" Feb 28 04:46:05 crc kubenswrapper[4624]: I0228 04:46:05.478576 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7v6d\" (UniqueName: \"kubernetes.io/projected/ffb5e204-101e-4af8-8b9c-263fc0c81b87-kube-api-access-b7v6d\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:06 crc kubenswrapper[4624]: I0228 04:46:06.380965 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-vtl95"] Feb 28 04:46:06 crc kubenswrapper[4624]: I0228 04:46:06.393105 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-vtl95"] Feb 28 04:46:08 crc kubenswrapper[4624]: I0228 04:46:08.100410 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2600acf-2ea0-4398-97a8-a2aeae548e6a" path="/var/lib/kubelet/pods/f2600acf-2ea0-4398-97a8-a2aeae548e6a/volumes" Feb 28 04:47:04 crc kubenswrapper[4624]: I0228 04:47:04.556072 4624 scope.go:117] "RemoveContainer" containerID="9e4b4dca62196265e37eed5040869306c64ed027a01f20abd2c4a7c2de0fd7aa" Feb 28 04:47:04 crc kubenswrapper[4624]: I0228 04:47:04.609446 4624 scope.go:117] "RemoveContainer" 
containerID="a2fef17f63c9c8d81b27d338cb4b9a508029dc25caf9d7fe65aef7471d15addb" Feb 28 04:47:25 crc kubenswrapper[4624]: I0228 04:47:25.336119 4624 generic.go:334] "Generic (PLEG): container finished" podID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerID="690b2051232960e2ae8d304a90855f1773a80109db67607d79ef5067e749b2ec" exitCode=0 Feb 28 04:47:25 crc kubenswrapper[4624]: I0228 04:47:25.336697 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zrs82/must-gather-8hw26" event={"ID":"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec","Type":"ContainerDied","Data":"690b2051232960e2ae8d304a90855f1773a80109db67607d79ef5067e749b2ec"} Feb 28 04:47:25 crc kubenswrapper[4624]: I0228 04:47:25.337442 4624 scope.go:117] "RemoveContainer" containerID="690b2051232960e2ae8d304a90855f1773a80109db67607d79ef5067e749b2ec" Feb 28 04:47:25 crc kubenswrapper[4624]: I0228 04:47:25.518183 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zrs82_must-gather-8hw26_530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec/gather/0.log" Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.208712 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zrs82/must-gather-8hw26"] Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.209354 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zrs82/must-gather-8hw26" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="copy" containerID="cri-o://b6807c908b273bcf0c28f0a5f37ba67430d8e82aaf9f2b0d7b83548efe0844a7" gracePeriod=2 Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.219227 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zrs82/must-gather-8hw26"] Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.506561 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zrs82_must-gather-8hw26_530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec/copy/0.log" Feb 28 04:47:38 crc 
kubenswrapper[4624]: I0228 04:47:38.507009 4624 generic.go:334] "Generic (PLEG): container finished" podID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerID="b6807c908b273bcf0c28f0a5f37ba67430d8e82aaf9f2b0d7b83548efe0844a7" exitCode=143 Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.646276 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zrs82_must-gather-8hw26_530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec/copy/0.log" Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.647108 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.780158 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-must-gather-output\") pod \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.780649 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml94m\" (UniqueName: \"kubernetes.io/projected/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-kube-api-access-ml94m\") pod \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\" (UID: \"530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec\") " Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.796410 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-kube-api-access-ml94m" (OuterVolumeSpecName: "kube-api-access-ml94m") pod "530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" (UID: "530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec"). InnerVolumeSpecName "kube-api-access-ml94m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.882745 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml94m\" (UniqueName: \"kubernetes.io/projected/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-kube-api-access-ml94m\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.958474 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" (UID: "530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:47:38 crc kubenswrapper[4624]: I0228 04:47:38.985211 4624 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:39 crc kubenswrapper[4624]: I0228 04:47:39.515893 4624 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zrs82_must-gather-8hw26_530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec/copy/0.log" Feb 28 04:47:39 crc kubenswrapper[4624]: I0228 04:47:39.516607 4624 scope.go:117] "RemoveContainer" containerID="b6807c908b273bcf0c28f0a5f37ba67430d8e82aaf9f2b0d7b83548efe0844a7" Feb 28 04:47:39 crc kubenswrapper[4624]: I0228 04:47:39.516759 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zrs82/must-gather-8hw26" Feb 28 04:47:39 crc kubenswrapper[4624]: I0228 04:47:39.539324 4624 scope.go:117] "RemoveContainer" containerID="690b2051232960e2ae8d304a90855f1773a80109db67607d79ef5067e749b2ec" Feb 28 04:47:40 crc kubenswrapper[4624]: I0228 04:47:40.096390 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" path="/var/lib/kubelet/pods/530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec/volumes" Feb 28 04:47:49 crc kubenswrapper[4624]: I0228 04:47:49.540204 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:47:49 crc kubenswrapper[4624]: I0228 04:47:49.540738 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.149000 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537568-w6krh"] Feb 28 04:48:00 crc kubenswrapper[4624]: E0228 04:48:00.149906 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb5e204-101e-4af8-8b9c-263fc0c81b87" containerName="oc" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.149919 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb5e204-101e-4af8-8b9c-263fc0c81b87" containerName="oc" Feb 28 04:48:00 crc kubenswrapper[4624]: E0228 04:48:00.149939 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="copy" Feb 28 
04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.149946 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="copy" Feb 28 04:48:00 crc kubenswrapper[4624]: E0228 04:48:00.149966 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="gather" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.149972 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="gather" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.150169 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="copy" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.150190 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb5e204-101e-4af8-8b9c-263fc0c81b87" containerName="oc" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.150202 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="530a2e8c-9eb4-4691-ae0b-8c2dd0da33ec" containerName="gather" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.150769 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.156115 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.156323 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.156441 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.170546 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537568-w6krh"] Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.237068 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkxbr\" (UniqueName: \"kubernetes.io/projected/cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a-kube-api-access-kkxbr\") pod \"auto-csr-approver-29537568-w6krh\" (UID: \"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a\") " pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.338866 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkxbr\" (UniqueName: \"kubernetes.io/projected/cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a-kube-api-access-kkxbr\") pod \"auto-csr-approver-29537568-w6krh\" (UID: \"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a\") " pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.368451 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkxbr\" (UniqueName: \"kubernetes.io/projected/cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a-kube-api-access-kkxbr\") pod \"auto-csr-approver-29537568-w6krh\" (UID: \"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a\") " 
pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:00 crc kubenswrapper[4624]: I0228 04:48:00.466379 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:01 crc kubenswrapper[4624]: I0228 04:48:01.262802 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537568-w6krh"] Feb 28 04:48:01 crc kubenswrapper[4624]: I0228 04:48:01.707233 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-w6krh" event={"ID":"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a","Type":"ContainerStarted","Data":"10c59172568adeb0bf56738101b29625a9f21e82944634e2fafec3a96e59c663"} Feb 28 04:48:02 crc kubenswrapper[4624]: I0228 04:48:02.718920 4624 generic.go:334] "Generic (PLEG): container finished" podID="cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a" containerID="04f09cbc34c076924fe10350d93c5515de2b581c49fdcf9b46e8e7dcbb7c9675" exitCode=0 Feb 28 04:48:02 crc kubenswrapper[4624]: I0228 04:48:02.719040 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-w6krh" event={"ID":"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a","Type":"ContainerDied","Data":"04f09cbc34c076924fe10350d93c5515de2b581c49fdcf9b46e8e7dcbb7c9675"} Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.091250 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.118769 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkxbr\" (UniqueName: \"kubernetes.io/projected/cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a-kube-api-access-kkxbr\") pod \"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a\" (UID: \"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a\") " Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.130323 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a-kube-api-access-kkxbr" (OuterVolumeSpecName: "kube-api-access-kkxbr") pod "cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a" (UID: "cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a"). InnerVolumeSpecName "kube-api-access-kkxbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.221845 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkxbr\" (UniqueName: \"kubernetes.io/projected/cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a-kube-api-access-kkxbr\") on node \"crc\" DevicePath \"\"" Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.740246 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-w6krh" event={"ID":"cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a","Type":"ContainerDied","Data":"10c59172568adeb0bf56738101b29625a9f21e82944634e2fafec3a96e59c663"} Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.740485 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c59172568adeb0bf56738101b29625a9f21e82944634e2fafec3a96e59c663" Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.740325 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-w6krh" Feb 28 04:48:04 crc kubenswrapper[4624]: I0228 04:48:04.783772 4624 scope.go:117] "RemoveContainer" containerID="013ca0117a6fb3c4255eab4a78474f8ca0018781f7384e29a51f2a7d5074715e" Feb 28 04:48:05 crc kubenswrapper[4624]: I0228 04:48:05.168653 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-tbwph"] Feb 28 04:48:05 crc kubenswrapper[4624]: I0228 04:48:05.177836 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-tbwph"] Feb 28 04:48:06 crc kubenswrapper[4624]: I0228 04:48:06.100593 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd2fdae-47d8-468e-99e0-40a3f203b16b" path="/var/lib/kubelet/pods/9bd2fdae-47d8-468e-99e0-40a3f203b16b/volumes" Feb 28 04:48:19 crc kubenswrapper[4624]: I0228 04:48:19.539828 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:48:19 crc kubenswrapper[4624]: I0228 04:48:19.540432 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:48:49 crc kubenswrapper[4624]: I0228 04:48:49.539527 4624 patch_prober.go:28] interesting pod/machine-config-daemon-mbfnv container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:48:49 crc kubenswrapper[4624]: 
I0228 04:48:49.540622 4624 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:48:49 crc kubenswrapper[4624]: I0228 04:48:49.540871 4624 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" Feb 28 04:48:49 crc kubenswrapper[4624]: I0228 04:48:49.542642 4624 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"} pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:48:49 crc kubenswrapper[4624]: I0228 04:48:49.542750 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" containerName="machine-config-daemon" containerID="cri-o://d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" gracePeriod=600 Feb 28 04:48:49 crc kubenswrapper[4624]: E0228 04:48:49.692245 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:48:50 crc kubenswrapper[4624]: I0228 04:48:50.225168 4624 generic.go:334] "Generic (PLEG): container finished" 
podID="a8ccd115-f935-454b-94cc-26327d5df491" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" exitCode=0 Feb 28 04:48:50 crc kubenswrapper[4624]: I0228 04:48:50.225221 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" event={"ID":"a8ccd115-f935-454b-94cc-26327d5df491","Type":"ContainerDied","Data":"d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"} Feb 28 04:48:50 crc kubenswrapper[4624]: I0228 04:48:50.225266 4624 scope.go:117] "RemoveContainer" containerID="fdfb1cf3f9040f3df4e179148406e99de62a8bafd6e00cd6ed8e57a52338ea14" Feb 28 04:48:50 crc kubenswrapper[4624]: I0228 04:48:50.225941 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:48:50 crc kubenswrapper[4624]: E0228 04:48:50.226291 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:49:02 crc kubenswrapper[4624]: I0228 04:49:02.089433 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:49:02 crc kubenswrapper[4624]: E0228 04:49:02.090475 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 
04:49:04 crc kubenswrapper[4624]: I0228 04:49:04.862263 4624 scope.go:117] "RemoveContainer" containerID="581159b51f85d2e02fd7d8b90f86ad081a01550b6264f8c21ae7e8a3f87c8267" Feb 28 04:49:16 crc kubenswrapper[4624]: I0228 04:49:16.092146 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:49:16 crc kubenswrapper[4624]: E0228 04:49:16.093542 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:49:31 crc kubenswrapper[4624]: I0228 04:49:31.086942 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:49:31 crc kubenswrapper[4624]: E0228 04:49:31.087740 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:49:46 crc kubenswrapper[4624]: I0228 04:49:46.118389 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:49:46 crc kubenswrapper[4624]: E0228 04:49:46.119115 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:49:59 crc kubenswrapper[4624]: I0228 04:49:59.088435 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:49:59 crc kubenswrapper[4624]: E0228 04:49:59.091505 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.148409 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537570-64djr"] Feb 28 04:50:00 crc kubenswrapper[4624]: E0228 04:50:00.149105 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a" containerName="oc" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.149118 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a" containerName="oc" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.149389 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe6aed3-d2b5-43d3-84b7-cec36bc3a89a" containerName="oc" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.150013 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.152557 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.152713 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.159058 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.164055 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537570-64djr"] Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.280525 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65vd7\" (UniqueName: \"kubernetes.io/projected/90aad764-4ace-48f1-9977-c9e5a9a49b91-kube-api-access-65vd7\") pod \"auto-csr-approver-29537570-64djr\" (UID: \"90aad764-4ace-48f1-9977-c9e5a9a49b91\") " pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.382809 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65vd7\" (UniqueName: \"kubernetes.io/projected/90aad764-4ace-48f1-9977-c9e5a9a49b91-kube-api-access-65vd7\") pod \"auto-csr-approver-29537570-64djr\" (UID: \"90aad764-4ace-48f1-9977-c9e5a9a49b91\") " pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.403653 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65vd7\" (UniqueName: \"kubernetes.io/projected/90aad764-4ace-48f1-9977-c9e5a9a49b91-kube-api-access-65vd7\") pod \"auto-csr-approver-29537570-64djr\" (UID: \"90aad764-4ace-48f1-9977-c9e5a9a49b91\") " 
pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.468429 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.940205 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537570-64djr"] Feb 28 04:50:00 crc kubenswrapper[4624]: I0228 04:50:00.965079 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-64djr" event={"ID":"90aad764-4ace-48f1-9977-c9e5a9a49b91","Type":"ContainerStarted","Data":"4f7cdc6cde818de8f3e71dd3b64848a8c9d5b445290aec65e2fd0abacadb7921"} Feb 28 04:50:01 crc kubenswrapper[4624]: I0228 04:50:01.818276 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ncgm6"] Feb 28 04:50:01 crc kubenswrapper[4624]: I0228 04:50:01.821196 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:01 crc kubenswrapper[4624]: I0228 04:50:01.832875 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncgm6"] Feb 28 04:50:01 crc kubenswrapper[4624]: I0228 04:50:01.922614 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjhcz\" (UniqueName: \"kubernetes.io/projected/834a940b-7fef-4038-b136-46bc3b913f7d-kube-api-access-tjhcz\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:01 crc kubenswrapper[4624]: I0228 04:50:01.922816 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-utilities\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:01 crc kubenswrapper[4624]: I0228 04:50:01.922910 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-catalog-content\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.003157 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8gc7v"] Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.005484 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.024570 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-utilities\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.024690 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-catalog-content\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.024753 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjhcz\" (UniqueName: \"kubernetes.io/projected/834a940b-7fef-4038-b136-46bc3b913f7d-kube-api-access-tjhcz\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.025412 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-utilities\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.025939 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-catalog-content\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " 
pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.036921 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gc7v"] Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.072551 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjhcz\" (UniqueName: \"kubernetes.io/projected/834a940b-7fef-4038-b136-46bc3b913f7d-kube-api-access-tjhcz\") pod \"community-operators-ncgm6\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.126959 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-utilities\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.127125 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfgd\" (UniqueName: \"kubernetes.io/projected/359c284e-de7a-4d5e-82df-19ccea74f7ee-kube-api-access-6qfgd\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.127184 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-catalog-content\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.203138 4624 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.230470 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfgd\" (UniqueName: \"kubernetes.io/projected/359c284e-de7a-4d5e-82df-19ccea74f7ee-kube-api-access-6qfgd\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.230564 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-catalog-content\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.230704 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-utilities\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.231136 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-utilities\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.231686 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-catalog-content\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " 
pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.256037 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfgd\" (UniqueName: \"kubernetes.io/projected/359c284e-de7a-4d5e-82df-19ccea74f7ee-kube-api-access-6qfgd\") pod \"redhat-marketplace-8gc7v\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.335987 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.908933 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncgm6"] Feb 28 04:50:02 crc kubenswrapper[4624]: I0228 04:50:02.998657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-64djr" event={"ID":"90aad764-4ace-48f1-9977-c9e5a9a49b91","Type":"ContainerStarted","Data":"0872dd1d5c170b6636f96a1256a8ff01471883f03b83d08777012a1cd74d25e9"} Feb 28 04:50:03 crc kubenswrapper[4624]: I0228 04:50:03.019599 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537570-64djr" podStartSLOduration=2.205304123 podStartE2EDuration="3.019578774s" podCreationTimestamp="2026-02-28 04:50:00 +0000 UTC" firstStartedPulling="2026-02-28 04:50:00.941657531 +0000 UTC m=+4455.605696830" lastFinishedPulling="2026-02-28 04:50:01.755932172 +0000 UTC m=+4456.419971481" observedRunningTime="2026-02-28 04:50:03.014615081 +0000 UTC m=+4457.678654390" watchObservedRunningTime="2026-02-28 04:50:03.019578774 +0000 UTC m=+4457.683618083" Feb 28 04:50:03 crc kubenswrapper[4624]: I0228 04:50:03.061455 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gc7v"] Feb 28 04:50:03 crc kubenswrapper[4624]: W0228 
04:50:03.148488 4624 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod834a940b_7fef_4038_b136_46bc3b913f7d.slice/crio-4901ef14e27f0a8c191eb812a7256ca9a5a0c53995bec74e954cbd8b174c5b9b WatchSource:0}: Error finding container 4901ef14e27f0a8c191eb812a7256ca9a5a0c53995bec74e954cbd8b174c5b9b: Status 404 returned error can't find the container with id 4901ef14e27f0a8c191eb812a7256ca9a5a0c53995bec74e954cbd8b174c5b9b Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.009671 4624 generic.go:334] "Generic (PLEG): container finished" podID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerID="e132398d2ee798eba7d97d615d929bd2d4c8e547a3726f66c630aa306c276dd6" exitCode=0 Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.010114 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerDied","Data":"e132398d2ee798eba7d97d615d929bd2d4c8e547a3726f66c630aa306c276dd6"} Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.010144 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerStarted","Data":"f68d571293aefb200544de1e763c1ca9bc24bbe0967152d926d14d8f9d55b789"} Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.014595 4624 generic.go:334] "Generic (PLEG): container finished" podID="90aad764-4ace-48f1-9977-c9e5a9a49b91" containerID="0872dd1d5c170b6636f96a1256a8ff01471883f03b83d08777012a1cd74d25e9" exitCode=0 Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.014657 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-64djr" event={"ID":"90aad764-4ace-48f1-9977-c9e5a9a49b91","Type":"ContainerDied","Data":"0872dd1d5c170b6636f96a1256a8ff01471883f03b83d08777012a1cd74d25e9"} Feb 28 04:50:04 crc 
kubenswrapper[4624]: I0228 04:50:04.019373 4624 generic.go:334] "Generic (PLEG): container finished" podID="834a940b-7fef-4038-b136-46bc3b913f7d" containerID="17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407" exitCode=0 Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.019423 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerDied","Data":"17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407"} Feb 28 04:50:04 crc kubenswrapper[4624]: I0228 04:50:04.019456 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerStarted","Data":"4901ef14e27f0a8c191eb812a7256ca9a5a0c53995bec74e954cbd8b174c5b9b"} Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.498511 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.573116 4624 scope.go:117] "RemoveContainer" containerID="8ecca4f7632b0bbd1fbd0aec8b0b91791ce2f9c1385d8e35ea04c012db30f71a" Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.603315 4624 scope.go:117] "RemoveContainer" containerID="9c1b17f7115104fc2392d3e12e956c97e3950731a9a7d7e7f3bbced9ef6e8751" Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.617113 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65vd7\" (UniqueName: \"kubernetes.io/projected/90aad764-4ace-48f1-9977-c9e5a9a49b91-kube-api-access-65vd7\") pod \"90aad764-4ace-48f1-9977-c9e5a9a49b91\" (UID: \"90aad764-4ace-48f1-9977-c9e5a9a49b91\") " Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.630693 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/90aad764-4ace-48f1-9977-c9e5a9a49b91-kube-api-access-65vd7" (OuterVolumeSpecName: "kube-api-access-65vd7") pod "90aad764-4ace-48f1-9977-c9e5a9a49b91" (UID: "90aad764-4ace-48f1-9977-c9e5a9a49b91"). InnerVolumeSpecName "kube-api-access-65vd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.643588 4624 scope.go:117] "RemoveContainer" containerID="23adc4493371fcb5b191cad664e8d133931655d65c72ddd1b7b1db3eb33886f1" Feb 28 04:50:05 crc kubenswrapper[4624]: I0228 04:50:05.720007 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65vd7\" (UniqueName: \"kubernetes.io/projected/90aad764-4ace-48f1-9977-c9e5a9a49b91-kube-api-access-65vd7\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.048541 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerStarted","Data":"58d6f996adbb5378797495ba2652dce239bf604ce5fb56bbec8cee281993edf5"} Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.050467 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-64djr" event={"ID":"90aad764-4ace-48f1-9977-c9e5a9a49b91","Type":"ContainerDied","Data":"4f7cdc6cde818de8f3e71dd3b64848a8c9d5b445290aec65e2fd0abacadb7921"} Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.050503 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7cdc6cde818de8f3e71dd3b64848a8c9d5b445290aec65e2fd0abacadb7921" Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.050534 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-64djr" Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.055855 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerStarted","Data":"8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f"} Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.129430 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-9ljtx"] Feb 28 04:50:06 crc kubenswrapper[4624]: I0228 04:50:06.140719 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-9ljtx"] Feb 28 04:50:08 crc kubenswrapper[4624]: I0228 04:50:08.082520 4624 generic.go:334] "Generic (PLEG): container finished" podID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerID="58d6f996adbb5378797495ba2652dce239bf604ce5fb56bbec8cee281993edf5" exitCode=0 Feb 28 04:50:08 crc kubenswrapper[4624]: I0228 04:50:08.082678 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerDied","Data":"58d6f996adbb5378797495ba2652dce239bf604ce5fb56bbec8cee281993edf5"} Feb 28 04:50:08 crc kubenswrapper[4624]: I0228 04:50:08.089769 4624 generic.go:334] "Generic (PLEG): container finished" podID="834a940b-7fef-4038-b136-46bc3b913f7d" containerID="8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f" exitCode=0 Feb 28 04:50:08 crc kubenswrapper[4624]: I0228 04:50:08.129346 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f4cff9-fdec-403a-a713-835337213c80" path="/var/lib/kubelet/pods/16f4cff9-fdec-403a-a713-835337213c80/volumes" Feb 28 04:50:08 crc kubenswrapper[4624]: I0228 04:50:08.130608 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerDied","Data":"8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f"} Feb 28 04:50:09 crc kubenswrapper[4624]: I0228 04:50:09.099667 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerStarted","Data":"6c50baf6c995fe7d54aeb16cb99287b42e8694745b1ffbbff66875170134720c"} Feb 28 04:50:09 crc kubenswrapper[4624]: I0228 04:50:09.101741 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerStarted","Data":"7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1"} Feb 28 04:50:09 crc kubenswrapper[4624]: I0228 04:50:09.135946 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8gc7v" podStartSLOduration=3.672848995 podStartE2EDuration="8.135925112s" podCreationTimestamp="2026-02-28 04:50:01 +0000 UTC" firstStartedPulling="2026-02-28 04:50:04.011718821 +0000 UTC m=+4458.675758130" lastFinishedPulling="2026-02-28 04:50:08.474794898 +0000 UTC m=+4463.138834247" observedRunningTime="2026-02-28 04:50:09.117326988 +0000 UTC m=+4463.781366297" watchObservedRunningTime="2026-02-28 04:50:09.135925112 +0000 UTC m=+4463.799964421" Feb 28 04:50:09 crc kubenswrapper[4624]: I0228 04:50:09.152863 4624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ncgm6" podStartSLOduration=3.6244525579999998 podStartE2EDuration="8.152837599s" podCreationTimestamp="2026-02-28 04:50:01 +0000 UTC" firstStartedPulling="2026-02-28 04:50:04.02128196 +0000 UTC m=+4458.685321269" lastFinishedPulling="2026-02-28 04:50:08.549667001 +0000 UTC m=+4463.213706310" observedRunningTime="2026-02-28 
04:50:09.147519745 +0000 UTC m=+4463.811559074" watchObservedRunningTime="2026-02-28 04:50:09.152837599 +0000 UTC m=+4463.816876908" Feb 28 04:50:11 crc kubenswrapper[4624]: I0228 04:50:11.087863 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:50:11 crc kubenswrapper[4624]: E0228 04:50:11.088516 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:50:12 crc kubenswrapper[4624]: I0228 04:50:12.204296 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:12 crc kubenswrapper[4624]: I0228 04:50:12.204651 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:12 crc kubenswrapper[4624]: I0228 04:50:12.252577 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:12 crc kubenswrapper[4624]: I0228 04:50:12.336919 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:12 crc kubenswrapper[4624]: I0228 04:50:12.336979 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:12 crc kubenswrapper[4624]: I0228 04:50:12.382780 4624 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:13 crc kubenswrapper[4624]: I0228 
04:50:13.195821 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:13 crc kubenswrapper[4624]: I0228 04:50:13.211597 4624 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.396526 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncgm6"] Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.397303 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ncgm6" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="registry-server" containerID="cri-o://7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1" gracePeriod=2 Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.583551 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gc7v"] Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.584324 4624 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8gc7v" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="registry-server" containerID="cri-o://6c50baf6c995fe7d54aeb16cb99287b42e8694745b1ffbbff66875170134720c" gracePeriod=2 Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.903283 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.938197 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-utilities\") pod \"834a940b-7fef-4038-b136-46bc3b913f7d\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.938288 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjhcz\" (UniqueName: \"kubernetes.io/projected/834a940b-7fef-4038-b136-46bc3b913f7d-kube-api-access-tjhcz\") pod \"834a940b-7fef-4038-b136-46bc3b913f7d\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.938324 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-catalog-content\") pod \"834a940b-7fef-4038-b136-46bc3b913f7d\" (UID: \"834a940b-7fef-4038-b136-46bc3b913f7d\") " Feb 28 04:50:15 crc kubenswrapper[4624]: I0228 04:50:15.939354 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-utilities" (OuterVolumeSpecName: "utilities") pod "834a940b-7fef-4038-b136-46bc3b913f7d" (UID: "834a940b-7fef-4038-b136-46bc3b913f7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.005733 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "834a940b-7fef-4038-b136-46bc3b913f7d" (UID: "834a940b-7fef-4038-b136-46bc3b913f7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.040399 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.040435 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834a940b-7fef-4038-b136-46bc3b913f7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.191067 4624 generic.go:334] "Generic (PLEG): container finished" podID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerID="6c50baf6c995fe7d54aeb16cb99287b42e8694745b1ffbbff66875170134720c" exitCode=0 Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.191504 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerDied","Data":"6c50baf6c995fe7d54aeb16cb99287b42e8694745b1ffbbff66875170134720c"} Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.194003 4624 generic.go:334] "Generic (PLEG): container finished" podID="834a940b-7fef-4038-b136-46bc3b913f7d" containerID="7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1" exitCode=0 Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.194044 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerDied","Data":"7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1"} Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.194099 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncgm6" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.194117 4624 scope.go:117] "RemoveContainer" containerID="7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.194101 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncgm6" event={"ID":"834a940b-7fef-4038-b136-46bc3b913f7d","Type":"ContainerDied","Data":"4901ef14e27f0a8c191eb812a7256ca9a5a0c53995bec74e954cbd8b174c5b9b"} Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.486650 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834a940b-7fef-4038-b136-46bc3b913f7d-kube-api-access-tjhcz" (OuterVolumeSpecName: "kube-api-access-tjhcz") pod "834a940b-7fef-4038-b136-46bc3b913f7d" (UID: "834a940b-7fef-4038-b136-46bc3b913f7d"). InnerVolumeSpecName "kube-api-access-tjhcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.495688 4624 scope.go:117] "RemoveContainer" containerID="8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.552060 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjhcz\" (UniqueName: \"kubernetes.io/projected/834a940b-7fef-4038-b136-46bc3b913f7d-kube-api-access-tjhcz\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.652212 4624 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.683388 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncgm6"] Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.697467 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ncgm6"] Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.702367 4624 scope.go:117] "RemoveContainer" containerID="17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.733328 4624 scope.go:117] "RemoveContainer" containerID="7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1" Feb 28 04:50:16 crc kubenswrapper[4624]: E0228 04:50:16.734323 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1\": container with ID starting with 7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1 not found: ID does not exist" containerID="7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.734366 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1"} err="failed to get container status \"7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1\": rpc error: code = NotFound desc = could not find container \"7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1\": container with ID starting with 7713af52ca46a8aa9f4d071e47f7baa2b16794903dfa7cd3ba23ec7b96e536e1 not found: ID does not exist" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.734397 4624 scope.go:117] "RemoveContainer" 
containerID="8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f" Feb 28 04:50:16 crc kubenswrapper[4624]: E0228 04:50:16.734788 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f\": container with ID starting with 8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f not found: ID does not exist" containerID="8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.734820 4624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f"} err="failed to get container status \"8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f\": rpc error: code = NotFound desc = could not find container \"8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f\": container with ID starting with 8d6d284c0e73ec899927aac0a6dfc35c3a72d550e66a405c2878624de589d51f not found: ID does not exist" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.734840 4624 scope.go:117] "RemoveContainer" containerID="17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407" Feb 28 04:50:16 crc kubenswrapper[4624]: E0228 04:50:16.735142 4624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407\": container with ID starting with 17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407 not found: ID does not exist" containerID="17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.735172 4624 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407"} err="failed to get container status \"17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407\": rpc error: code = NotFound desc = could not find container \"17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407\": container with ID starting with 17f8a87ef9fe01cf68f9be8c4d7367b1f1dbb713d6151713c72ea3f85574a407 not found: ID does not exist" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.753256 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qfgd\" (UniqueName: \"kubernetes.io/projected/359c284e-de7a-4d5e-82df-19ccea74f7ee-kube-api-access-6qfgd\") pod \"359c284e-de7a-4d5e-82df-19ccea74f7ee\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.753327 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-catalog-content\") pod \"359c284e-de7a-4d5e-82df-19ccea74f7ee\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.753508 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-utilities\") pod \"359c284e-de7a-4d5e-82df-19ccea74f7ee\" (UID: \"359c284e-de7a-4d5e-82df-19ccea74f7ee\") " Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.754779 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-utilities" (OuterVolumeSpecName: "utilities") pod "359c284e-de7a-4d5e-82df-19ccea74f7ee" (UID: "359c284e-de7a-4d5e-82df-19ccea74f7ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.757656 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359c284e-de7a-4d5e-82df-19ccea74f7ee-kube-api-access-6qfgd" (OuterVolumeSpecName: "kube-api-access-6qfgd") pod "359c284e-de7a-4d5e-82df-19ccea74f7ee" (UID: "359c284e-de7a-4d5e-82df-19ccea74f7ee"). InnerVolumeSpecName "kube-api-access-6qfgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.854801 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qfgd\" (UniqueName: \"kubernetes.io/projected/359c284e-de7a-4d5e-82df-19ccea74f7ee-kube-api-access-6qfgd\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.854838 4624 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:16 crc kubenswrapper[4624]: I0228 04:50:16.977603 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "359c284e-de7a-4d5e-82df-19ccea74f7ee" (UID: "359c284e-de7a-4d5e-82df-19ccea74f7ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.057914 4624 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359c284e-de7a-4d5e-82df-19ccea74f7ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.209176 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8gc7v" event={"ID":"359c284e-de7a-4d5e-82df-19ccea74f7ee","Type":"ContainerDied","Data":"f68d571293aefb200544de1e763c1ca9bc24bbe0967152d926d14d8f9d55b789"} Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.209284 4624 scope.go:117] "RemoveContainer" containerID="6c50baf6c995fe7d54aeb16cb99287b42e8694745b1ffbbff66875170134720c" Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.209315 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8gc7v" Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.244577 4624 scope.go:117] "RemoveContainer" containerID="58d6f996adbb5378797495ba2652dce239bf604ce5fb56bbec8cee281993edf5" Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.280963 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gc7v"] Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.282962 4624 scope.go:117] "RemoveContainer" containerID="e132398d2ee798eba7d97d615d929bd2d4c8e547a3726f66c630aa306c276dd6" Feb 28 04:50:17 crc kubenswrapper[4624]: I0228 04:50:17.301005 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8gc7v"] Feb 28 04:50:18 crc kubenswrapper[4624]: I0228 04:50:18.111818 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" path="/var/lib/kubelet/pods/359c284e-de7a-4d5e-82df-19ccea74f7ee/volumes" Feb 28 04:50:18 crc 
kubenswrapper[4624]: I0228 04:50:18.113405 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" path="/var/lib/kubelet/pods/834a940b-7fef-4038-b136-46bc3b913f7d/volumes" Feb 28 04:50:24 crc kubenswrapper[4624]: I0228 04:50:24.087475 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:50:24 crc kubenswrapper[4624]: E0228 04:50:24.088412 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:50:36 crc kubenswrapper[4624]: I0228 04:50:36.102367 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:50:36 crc kubenswrapper[4624]: E0228 04:50:36.103476 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491" Feb 28 04:50:48 crc kubenswrapper[4624]: I0228 04:50:48.087421 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4" Feb 28 04:50:48 crc kubenswrapper[4624]: E0228 04:50:48.088295 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:51:00 crc kubenswrapper[4624]: I0228 04:51:00.086917 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:51:00 crc kubenswrapper[4624]: E0228 04:51:00.087624 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:51:05 crc kubenswrapper[4624]: I0228 04:51:05.686929 4624 scope.go:117] "RemoveContainer" containerID="8040674f1ccf3028190d85e9c0ff1573f5594697fd07556cdbaf7423137c8bff"
Feb 28 04:51:15 crc kubenswrapper[4624]: I0228 04:51:15.087364 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:51:15 crc kubenswrapper[4624]: E0228 04:51:15.088161 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:51:29 crc kubenswrapper[4624]: I0228 04:51:29.087709 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:51:29 crc kubenswrapper[4624]: E0228 04:51:29.088443 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:51:44 crc kubenswrapper[4624]: I0228 04:51:44.087756 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:51:44 crc kubenswrapper[4624]: E0228 04:51:44.089536 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:51:56 crc kubenswrapper[4624]: I0228 04:51:56.095338 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:51:56 crc kubenswrapper[4624]: E0228 04:51:56.096198 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.159898 4624 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537572-7fn59"]
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160530 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="registry-server"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160543 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="registry-server"
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160559 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="extract-content"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160565 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="extract-content"
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160574 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="extract-utilities"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160580 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="extract-utilities"
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160594 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90aad764-4ace-48f1-9977-c9e5a9a49b91" containerName="oc"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160600 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="90aad764-4ace-48f1-9977-c9e5a9a49b91" containerName="oc"
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160611 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="extract-content"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160616 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="extract-content"
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160625 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="registry-server"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160632 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="registry-server"
Feb 28 04:52:00 crc kubenswrapper[4624]: E0228 04:52:00.160644 4624 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="extract-utilities"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160650 4624 state_mem.go:107] "Deleted CPUSet assignment" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="extract-utilities"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160818 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="359c284e-de7a-4d5e-82df-19ccea74f7ee" containerName="registry-server"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.160856 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="90aad764-4ace-48f1-9977-c9e5a9a49b91" containerName="oc"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.161074 4624 memory_manager.go:354] "RemoveStaleState removing state" podUID="834a940b-7fef-4038-b136-46bc3b913f7d" containerName="registry-server"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.161672 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.168041 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.168265 4624 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5hcl8"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.168303 4624 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.171072 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537572-7fn59"]
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.350100 4624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt26k\" (UniqueName: \"kubernetes.io/projected/208daee5-4e19-43c4-8fc1-52a1258d20fc-kube-api-access-kt26k\") pod \"auto-csr-approver-29537572-7fn59\" (UID: \"208daee5-4e19-43c4-8fc1-52a1258d20fc\") " pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.452270 4624 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt26k\" (UniqueName: \"kubernetes.io/projected/208daee5-4e19-43c4-8fc1-52a1258d20fc-kube-api-access-kt26k\") pod \"auto-csr-approver-29537572-7fn59\" (UID: \"208daee5-4e19-43c4-8fc1-52a1258d20fc\") " pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.473777 4624 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt26k\" (UniqueName: \"kubernetes.io/projected/208daee5-4e19-43c4-8fc1-52a1258d20fc-kube-api-access-kt26k\") pod \"auto-csr-approver-29537572-7fn59\" (UID: \"208daee5-4e19-43c4-8fc1-52a1258d20fc\") " pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.483327 4624 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.988377 4624 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537572-7fn59"]
Feb 28 04:52:00 crc kubenswrapper[4624]: I0228 04:52:00.996566 4624 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 28 04:52:01 crc kubenswrapper[4624]: I0228 04:52:01.486683 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537572-7fn59" event={"ID":"208daee5-4e19-43c4-8fc1-52a1258d20fc","Type":"ContainerStarted","Data":"af9e4ab3e064d3103d42d88946f21451db583e432e27e6b91afbe79d597cecc5"}
Feb 28 04:52:02 crc kubenswrapper[4624]: I0228 04:52:02.496806 4624 generic.go:334] "Generic (PLEG): container finished" podID="208daee5-4e19-43c4-8fc1-52a1258d20fc" containerID="a3b636bf92d2764f515a416990e91c2170c94883fbcff863decad8e969317534" exitCode=0
Feb 28 04:52:02 crc kubenswrapper[4624]: I0228 04:52:02.497003 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537572-7fn59" event={"ID":"208daee5-4e19-43c4-8fc1-52a1258d20fc","Type":"ContainerDied","Data":"a3b636bf92d2764f515a416990e91c2170c94883fbcff863decad8e969317534"}
Feb 28 04:52:03 crc kubenswrapper[4624]: I0228 04:52:03.854376 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:03 crc kubenswrapper[4624]: I0228 04:52:03.929807 4624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt26k\" (UniqueName: \"kubernetes.io/projected/208daee5-4e19-43c4-8fc1-52a1258d20fc-kube-api-access-kt26k\") pod \"208daee5-4e19-43c4-8fc1-52a1258d20fc\" (UID: \"208daee5-4e19-43c4-8fc1-52a1258d20fc\") "
Feb 28 04:52:03 crc kubenswrapper[4624]: I0228 04:52:03.938967 4624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208daee5-4e19-43c4-8fc1-52a1258d20fc-kube-api-access-kt26k" (OuterVolumeSpecName: "kube-api-access-kt26k") pod "208daee5-4e19-43c4-8fc1-52a1258d20fc" (UID: "208daee5-4e19-43c4-8fc1-52a1258d20fc"). InnerVolumeSpecName "kube-api-access-kt26k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:52:04 crc kubenswrapper[4624]: I0228 04:52:04.040476 4624 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt26k\" (UniqueName: \"kubernetes.io/projected/208daee5-4e19-43c4-8fc1-52a1258d20fc-kube-api-access-kt26k\") on node \"crc\" DevicePath \"\""
Feb 28 04:52:04 crc kubenswrapper[4624]: I0228 04:52:04.517526 4624 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537572-7fn59" event={"ID":"208daee5-4e19-43c4-8fc1-52a1258d20fc","Type":"ContainerDied","Data":"af9e4ab3e064d3103d42d88946f21451db583e432e27e6b91afbe79d597cecc5"}
Feb 28 04:52:04 crc kubenswrapper[4624]: I0228 04:52:04.517582 4624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9e4ab3e064d3103d42d88946f21451db583e432e27e6b91afbe79d597cecc5"
Feb 28 04:52:04 crc kubenswrapper[4624]: I0228 04:52:04.517596 4624 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-7fn59"
Feb 28 04:52:04 crc kubenswrapper[4624]: I0228 04:52:04.928068 4624 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-sqrzf"]
Feb 28 04:52:04 crc kubenswrapper[4624]: I0228 04:52:04.938578 4624 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-sqrzf"]
Feb 28 04:52:06 crc kubenswrapper[4624]: I0228 04:52:06.108320 4624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb5e204-101e-4af8-8b9c-263fc0c81b87" path="/var/lib/kubelet/pods/ffb5e204-101e-4af8-8b9c-263fc0c81b87/volumes"
Feb 28 04:52:08 crc kubenswrapper[4624]: I0228 04:52:08.087792 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:52:08 crc kubenswrapper[4624]: E0228 04:52:08.088401 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:52:23 crc kubenswrapper[4624]: I0228 04:52:23.087583 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:52:23 crc kubenswrapper[4624]: E0228 04:52:23.088281 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:52:35 crc kubenswrapper[4624]: I0228 04:52:35.087329 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:52:35 crc kubenswrapper[4624]: E0228 04:52:35.089116 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"
Feb 28 04:52:47 crc kubenswrapper[4624]: I0228 04:52:47.087536 4624 scope.go:117] "RemoveContainer" containerID="d024e07cb7111c445ba9c724ed51b9236596d6259adae020dd5c796e55632bf4"
Feb 28 04:52:47 crc kubenswrapper[4624]: E0228 04:52:47.088429 4624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mbfnv_openshift-machine-config-operator(a8ccd115-f935-454b-94cc-26327d5df491)\"" pod="openshift-machine-config-operator/machine-config-daemon-mbfnv" podUID="a8ccd115-f935-454b-94cc-26327d5df491"